Sysctl for HiveMQ Container/Pod on K8s

Hi Team,

I have deployed HiveMQ on a Kubernetes cluster, but I am hitting a connection limit: when I send 120k concurrent connections with the SDKPerf MQTT tool, the HiveMQ cluster only reaches about 80k connections, and connections get disconnected/dropped once they hit 80k.

Is there any way to set sysctls at the pod level for HiveMQ on a Kubernetes cluster?

I referred to the link below to set up the HiveMQ cluster on k8s.
https://www.hivemq.com/blog/hivemq-cluster-docker-kubernetes/

How can I benchmark the HiveMQ application at the pod level? I have referred to the link below, but it covers the VM/OS level.

https://stackoverflow.com/questions/29106219/which-mqtt-server-for-1m-connections

Can we set these parameters on the worker node instead? And if we do, are they reflected at the pod level?

Please help me find a better solution for running HiveMQ containerized on k8s.

Thanks

Hey,

unfortunately this is one of the downsides of container environments, but there are multiple ways to deal with it, though not all of them are very straightforward.

The simplest solution would be to scale out your test-tool clients horizontally. I have not used SDKPerf, so I don't know if this is possible with your tool, but in general it means more source addresses are used, which increases the total number of connection tuples a single HiveMQ container can handle. We have successfully run tests with 200k+ connections on Kubernetes this way.
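As a rough sketch of what "scaling out the clients" can look like on Kubernetes: run the load generator as a Deployment with several replicas, so each pod connects from its own pod IP. The image name and arguments below are placeholders, not real SDKPerf invocations:

```shell
# Sketch only: run the MQTT load tool as 4 pods so connections originate
# from 4 distinct source IPs. Replace image and args with your actual tool.
kubectl create deployment mqtt-loadgen \
  --image=<your-loadgen-image> \
  --replicas=4

# Later, add more source addresses by scaling the Deployment:
kubectl scale deployment mqtt-loadgen --replicas=8
```

Each replica contributes its own (source IP, source port) tuple space, which is what lifts the per-broker connection ceiling.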

On Kubernetes, the relevant sysctls from your post (in the net namespace) cannot be set in containers by default.
See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#enabling-unsafe-sysctls for more information.

You could definitely try setting the kubelet arguments as described in the documentation above. The net.* sysctl parameters are namespaced (per network namespace), so explicitly allowing them should still be reasonably safe for your nodes. (Some of these sysctls do allow the pod to allocate more memory in kernel space, so be sure to monitor system memory and the kubelet on the hosts if you run into issues.)

Another approach would of course be to set the sysctls directly on your Kubernetes nodes, as the pods will inherit these values upon startup, but I'm assuming that's not what you're going for.
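If you do go the node-level route, it would look roughly like this (values illustrative; note that not every net.* sysctl is inherited by a new network namespace, so verify from inside a pod):

```shell
# Apply immediately on the worker node:
sudo sysctl -w net.core.somaxconn=32768
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"

# Persist across reboots:
cat <<'EOF' | sudo tee /etc/sysctl.d/90-hivemq.conf
net.core.somaxconn = 32768
net.ipv4.ip_local_port_range = 1024 65535
EOF
sudo sysctl --system
```

You can then check the effective value from inside a HiveMQ pod with something like `kubectl exec <pod> -- sysctl net.core.somaxconn` to confirm it was actually inherited.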

On a side note: Make sure to set service.spec.externalTrafficPolicy to Local in the service spec your testing tool is connecting to, to make sure the source IPs are preserved!
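Applying that setting is a one-liner; the Service name `hivemq` below is an assumption, substitute the Service your test tool connects to:

```shell
# Preserve client source IPs by routing only to node-local endpoints:
kubectl patch svc hivemq \
  --patch '{"spec":{"externalTrafficPolicy":"Local"}}'
```

Note this only applies to Services of type NodePort or LoadBalancer, and traffic will only be delivered to nodes that actually run a HiveMQ pod.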
