Decouple config files from HiveMQ Docker Image

I would like to decouple the configuration files from the HiveMQ Docker image using a ConfigMap.

I am using the below deployment manifest. How and where should I use a ConfigMap in this manifest?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hivemq-cluster
  labels:
    app: hivemq-cluster
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hivemq-cluster
  template:
    metadata:
      labels:
        app: hivemq-cluster
        cluster: hivemq
    spec:
      containers:
      - name: hivemq-pods
        image: hivemq-421
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "8Gi"
            cpu: "2000m"
          requests:
            memory: "8Gi"
            cpu: "2000m"
        ports:
        - containerPort: 8080
          protocol: TCP
          name: web-ui
        - containerPort: 8000
          protocol: TCP
          name: websocket
        - containerPort: 1883
          protocol: TCP
          name: mqtt
        env:
        - name: HIVEMQ_DNS_DISCOVERY_ADDRESS
          value: "hivemq-discovery.hivemq.svc.cluster.local."
        - name: HIVEMQ_DNS_DISCOVERY_TIMEOUT
          value: "30"
        - name: HIVEMQ_DNS_DISCOVERY_INTERVAL
          value: "31"
        readinessProbe:
          tcpSocket:
            port: 8000
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
        livenessProbe:
          tcpSocket:
            port: 8000
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
      initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - |
          sysctl -w fs.file-max=5097152
          sysctl -w fs.nr_open=1000000
          sysctl -w net.core.netdev_max_backlog=32768
          sysctl -w net.core.rmem_default=524288
          sysctl -w net.core.wmem_default=524288
          sysctl -w net.core.rmem_max=67108864
          sysctl -w net.core.wmem_max=67108864
          sysctl -w net.core.somaxconn=32768
          sysctl -w net.ipv4.tcp_max_syn_backlog=16384
          sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
          sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'
          sysctl -w net.ipv4.tcp_tw_recycle=1
          sysctl -w net.ipv4.tcp_tw_reuse=1
          sysctl -w net.ipv4.tcp_fin_timeout=30
          sysctl -w net.ipv4.ip_local_port_range='1024 65535'
        securityContext:
          privileged: true
      nodeSelector:
        kubernetes.io/hostname: k8s-worker-1.novalocal

---
kind: Service
apiVersion: v1
metadata:
  name: hivemq-discovery
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  selector:
    app: hivemq-cluster
  ports:
  - name: mqtt
    protocol: TCP
    port: 1883
    targetPort: 1883
  - name: websocket
    protocol: TCP
    port: 8000
    targetPort: 8000
  clusterIP: None

I have used the below link for the ConfigMap, but it is not working for HiveMQ.

Configure a Pod to Use a ConfigMap | Kubernetes

Please help me decouple the configuration and logback files from the HiveMQ Docker image.

Thanks

I have tried to add the below parameters to the deployment manifest, but I am getting a "Read-only file system" error in the pod logs.

    volumeMounts:
    - name: hivemq-volume-2
      mountPath: /opt/hivemq/conf/config.xml
      subPath: config.xml
  volumes:
  - name: hivemq-volume-2
    configMap:
      name: hivemq-config
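The hivemq-config ConfigMap itself was created from the local config file, roughly like this (assuming config.xml is in the working directory):

    kubectl create configmap hivemq-config --from-file=config.xml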

I am getting the below error in the HiveMQ pod logs after deployment.

Getting bind address from container hostname
set bind address from container hostname to 10.47.0.1
chown: changing ownership of '/opt/hivemq/conf/config.xml': Read-only file system

The way we handle this in the Kubernetes image is by creating an override directory from which we symlink the files into the actual conf directory. This would require an additional pre-entry script in your case, and therefore also a different image.
You could try using the hivemq/hivemq4:k8s-4.3.5 image and setting the mountPath to /conf-override/conf instead (no subPath required).
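Adapted from your snippet above, that mount would look roughly like this (a sketch; the volume and ConfigMap names are kept from your manifest):

    volumeMounts:
    - name: hivemq-volume-2
      mountPath: /conf-override/conf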
Or you could also try our new HiveMQ Kubernetes Operator EAP, which makes deploying HiveMQ on K8s much easier (and should allow for all the customization you need).

By the way, it is risky to run Pods that are exposed to external traffic with privileged: true; you should consider setting the sysctl values at the node level instead, if possible.
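For example, rather than a privileged initContainer, the values from your init-sysctl container could live in a sysctl drop-in on each worker node (a sketch, assuming root access to the nodes):

# /etc/sysctl.d/99-hivemq.conf (excerpt; same values as the init-sysctl container)
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 16384
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864

Apply it once with sysctl --system (or a reboot), then drop the privileged initContainer entirely.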

We have taken a license for HiveMQ 4.2.1, and all of our performance and benchmarking testing was done on it. Is it possible to do this on the same version?

Or is there any other way to do this customization with the same manifest?

We will check the operator later, because it uses a Helm chart and we want to stay with our previous manifest.

What changes need to be made in the pre-entry.sh script?

Below are the contents of pre-entry.sh in the hivemq-4.3.5 image.

#!/usr/bin/env bash

if [[ "${HIVEMQ_CLUSTER_TRANSPORT_TYPE}" == "UDP" ]]; then
    # shellcheck disable=SC2016
    sed -i -e 's|--TRANSPORT_TYPE--|<udp><bind-address>${HIVEMQ_BIND_ADDRESS}</bind-address><bind-port>${HIVEMQ_CLUSTER_PORT}</bind-port><multicast-enabled>false</multicast-enabled></udp>|' /opt/hivemq/conf/config.xml
elif [[ "${HIVEMQ_CLUSTER_TRANSPORT_TYPE}" == "TCP" ]]; then
    # shellcheck disable=SC2016
    sed -i -e 's|--TRANSPORT_TYPE--|<tcp><bind-address>${HIVEMQ_BIND_ADDRESS}</bind-address><bind-port>${HIVEMQ_CLUSTER_PORT}</bind-port></tcp>|' /opt/hivemq/conf/config.xml
fi

exec /opt/docker-entrypoint.sh "$@"

The main script of interest is pre-entry_1.sh:
docker run hivemq/hivemq4:k8s-4.3.4 cat /opt/pre-entry_1.sh

It is not easily adaptable for your use case though, as it uses a shared volume with an initContainer that initializes the folder structure.
For your case, something like this as a pre-entry should work:

mkdir -p /conf-override/conf
cp -rsv /conf-override/* /opt/hivemq/

And then mount your config to /conf-override/conf

I have tried this in the deployment manifest, but it is not working.

  initContainers:
  - name: init
    image: busybox
    imagePullPolicy: IfNotPresent
    command:
    - /bin/sh
    - -c
    - |
      mkdir -p /conf-override/conf
      cp -rsv /conf-override/* /opt/hivemq/
    volumeMounts:
    - name: hivemq-volume-2
      mountPath: /conf-override/conf
  volumes:
  - name: hivemq-volume-2
    configMap:
      name: hivemq-config

Is this the right way to do it, or am I missing something?

Below are the errors

$ kubectl get pods
NAME                              READY   STATUS                  RESTARTS   AGE
hivemq-cluster-687b59cbd7-j76b8   0/1     Init:CrashLoopBackOff   6          7m45s
hivemq-cluster-687b59cbd7-pfx5b   0/1     Init:CrashLoopBackOff   6          7m45s
hivemq-cluster-687b59cbd7-zs9mx   0/1     Init:CrashLoopBackOff   6          7m45s

$ kubectl logs hivemq-cluster-687b59cbd7-j76b8
Error from server (BadRequest): container "hivemq-sit-pods" in pod "hivemq-cluster-687b59cbd7-j76b8" is waiting to start: PodInitializing

An initContainer has its own file system, separate from the actual HiveMQ container. The commands I specified above need to run in the HiveMQ container itself, which is why you will need a separate pre-entry script, like we use in the DNS discovery image, that creates the folders and symlinks. You can only add a pre-entry script by creating your own HiveMQ image with the DNS image as a base and adding the pre-entry script to that image, which then execs the DNS pre-entry at the end.
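A rough sketch of what that could look like (the base image tag and the path of the original pre-entry script inside the image are assumptions; verify them with docker run ... cat as shown above):

Dockerfile (sketch):

FROM hivemq/hivemq4:dns-latest
COPY conf-override-pre-entry.sh /opt/conf-override-pre-entry.sh
RUN chmod +x /opt/conf-override-pre-entry.sh
ENTRYPOINT ["/opt/conf-override-pre-entry.sh"]

conf-override-pre-entry.sh (sketch):

#!/usr/bin/env bash
# Link the mounted override files into the HiveMQ home directory.
mkdir -p /conf-override/conf
cp -rsv /conf-override/* /opt/hivemq/
# Hand off to the original pre-entry of the base image (path is an assumption).
exec /opt/pre-entry.sh "$@"

You would then mount your ConfigMap at /conf-override/conf as discussed above.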

So decoupling the configs of HiveMQ and its extensions is not easy with the HiveMQ DNS image (I mean that doing your own customization with the HiveMQ DNS Docker image is not easy in a Kubernetes cluster).

If I use the HiveMQ base image for the pre-entry script, then HiveMQ clustering also needs to be enabled via the DNS discovery extension, which I have not done. That would mean more changes in the k8s manifest files.

Yes, unfortunately in this case it'll be a bit complicated. The root cause here lies within the base image, so as I see it, your best bet is building a slightly customized image with a pre-entry script on top of the DNS image.

We will make sure to improve how we handle read-only volumes soon, but I'm afraid we likely won't be able to re-release the 4.2.1 image you need, because changing the underlying layers/scripts may very well lead to problems for other users that are also running that image in production right now.

In short, customization is not possible with the previous blog's approach without further modification, and it is complicated. All configs of HiveMQ and its extensions are hard-coded in the Docker image.

But why are we getting a read-only file system error while mounting the volume at the same path?

chown: changing ownership of '/opt/hivemq/conf/config.xml': Read-only file system

So we need to go with the newer blog you have shared, for better customization and automation of the HiveMQ deployment on a k8s cluster.

Please make those changes in the latest HiveMQ version and let us know. It would be good if the read-only issue were resolved.

Hi @rp85,

You opened a ticket in the HiveMQ Community Forum, specifically the HiveMQ Community Edition subforum.
Our team and community are, as you can see, very helpful, and they provided you with a quick solution: using our k8s operator.

It sounds to me like you are looking to get commercial support for a commercial HiveMQ license. In this case please reach out via the appropriate support channels.

PS: HiveMQ has a minimum requirement of 4 CPUs per instance, so

resources:
  limits:
    memory: "8Gi"
    cpu: "2000m"
  requests:
    memory: "8Gi"
    cpu: "2000m"

is not a viable HiveMQ setup.
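With that 4-CPU minimum, the block would need to look more like the following (a sketch; the memory values are kept from your manifest):

resources:
  limits:
    memory: "8Gi"
    cpu: "4"
  requests:
    memory: "8Gi"
    cpu: "4"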

Thank you and regards,
Florian

Hi @hivemq-support,

We are trying to use the latest HiveMQ + Kubernetes blog to deploy a HiveMQ cluster using the adapted-manifest method, and have deployed the HiveMQ Operator and CRD.

We used the below link to deploy the HiveMQ Operator:

https://github.com/hivemq/helm-charts/tree/master/manifests

but we have not found HiveMQ cluster deployment steps like those provided in the previous blog:

How to run a HiveMQ cluster with Docker and Kubernetes

Also, how do we expose ports outside the k8s cluster using a NodePort service? We are not using a LoadBalancer service.
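What we have in mind is roughly the below NodePort Service, reusing the labels and ports from our earlier manifest (the nodePort value is just an example):

apiVersion: v1
kind: Service
metadata:
  name: hivemq-mqtt
spec:
  type: NodePort
  selector:
    app: hivemq-cluster
  ports:
  - name: mqtt
    protocol: TCP
    port: 1883
    targetPort: 1883
    nodePort: 31883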

We are already using the commercial product, with HiveMQ deployed on VMs, and now we are going to move to the Kubernetes platform.

Hence, we need your help with how to deploy a HiveMQ cluster on k8s. You provided good steps in the previous blog, which we followed too, but customization is difficult with that approach.

Thanks
RP

Hello RP,

You can find detailed steps for your HiveMQ Cluster deployment with k8s in our user guide.

Kind regards,
Florian

I went through the documents, but not many details are available. Only extension info is given; how to deploy a HiveMQ cluster and how to expose the service using NodePort are not provided.

Hi RP,

Did you check this part?

It shows how to install a HiveMQ cluster, with an exposed service, in one step, using a single config file.
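For orientation, the cluster is described by a single HiveMQCluster custom resource; a minimal one looks roughly like this (a sketch from memory; please check the exact field names against the user guide):

apiVersion: hivemq.com/v1
kind: HiveMQCluster
metadata:
  name: hivemq
spec:
  nodeCount: 3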

Again, this is the HiveMQ community forum. In case you are looking for extended commercial support for your production environment, feel free to reach out via the proper channels.

Thank you and kind regards,
Florian
