HiveMQ MQTT Client 10K client connection

I am developing a client program that uses Mqtt3AsyncClient and am trying to establish more than 10,000 client connections. Upon reaching 9,200+ connections, the client slows down and is not able to complete the 10k connections.

I'm running on a server with 32 CPUs and 32 GB RAM, Red Hat Enterprise Linux Server release 7.9, with Java™ SE Runtime Environment (build 1.8.0_221).

I have already overridden the RHEL default system limits:

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127883
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 127883
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
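(Side note: it is worth double-checking that these limits actually apply to the JVM process and not just to the interactive shell — a service started by systemd, for example, gets its own `LimitNOFILE`. One way to verify is to read the process's own view of its limits; `/proc/self` is used below only so the snippet runs standalone, substitute the Java PID in practice.)

```shell
# Effective limits of a running process; replace "self" with the
# Java process ID to inspect the client JVM instead of this shell.
grep -E "Max open files|Max processes" /proc/self/limits
```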

I have fine-tuned the OS TCP settings as described in this link:
Linux Test/Measurement hosts

net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
net.ipv4.tcp_rmem = 4096 87380 268435456
net.ipv4.tcp_wmem = 4096 65536 268435456
net.core.default_qdisc = fq
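(Another quick sanity check: edits to `/etc/sysctl.conf` only take effect after `sysctl -p` or a reboot, so it can help to read the live values back from `/proc/sys` and confirm the kernel is actually using them.)

```shell
# Read back the live values the kernel is using; they should match
# the entries in /etc/sysctl.conf after "sysctl -p".
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
cat /proc/sys/net/ipv4/tcp_rmem
```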

I’m using the latest HiveMQ MQTT Client version.


As fine-tuning the OS and TCP configuration did not help, I started reading up on the Netty framework, since it is used under the hood, and tried playing with some Netty JVM args as shown below.

NETTY_ARGS="-Dio.netty.eventLoopThreads=128 -Dio.netty.threadLocalMap.stringBuilder.initialSize=1024 -Dio.netty.threadLocalMap.stringBuilder.maxSize=4069 -Dio.netty.allocator.tinyCacheSize=5120 -Dio.netty.allocator.smallCacheSize=2560 -Dio.netty.allocator.normalCacheSize=640 -Dio.netty.allocator.maxCachedBufferCapacity=536870912 -Dio.netty.allocator.maxCachedByteBuffersPerChunk=10230 -Dio.netty.allocator.numHeapArenas=64 -Dio.netty.allocator.numDirectArenas=64"

In the end, I’m still stuck at 9,000+ client MQTT connections and not able to get beyond that. I’m having difficulty figuring out how to proceed now, wondering which config knob I have missed.

Any advice is greatly appreciated!

Hi @jowel,

my guess would be the port range (as I’m also running into this issue in large-scale tests sometimes).

Can you add a second TCP listener to the HiveMQ config and let those 10k clients connect to both listeners? If this works, then the port range is the limiting factor.
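For reference, a second listener is just another `tcp-listener` entry in the broker’s `config.xml`, along these lines (the port numbers are only examples):

```xml
<hivemq>
    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
        <tcp-listener>
            <port>1884</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>
</hivemq>
```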

You can check this property in “/etc/sysctl.conf” to see how many ephemeral ports can be used: net.ipv4.ip_local_port_range
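As a rough check: each outgoing connection to the same broker IP:port consumes one ephemeral source port, so the size of this range caps the connections per listener. You can compute the number of usable ports from the live kernel setting:

```shell
# Usable ephemeral ports = high bound - low bound + 1.
# A typical default of "32768 60999" yields only 28232 ports, and
# TIME_WAIT churn plus other local connections eat into that headroom.
awk '{print $2 - $1 + 1}' /proc/sys/net/ipv4/ip_local_port_range
```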

Michael from the HiveMQ team

Hi @michael_w,

Thanks for the reply — I got busy with other things. I had already set it to more than 10k previously: net.ipv4.ip_local_port_range = 1024 65500

I wanted to quickly check whether the default Netty config inside the HiveMQ MQTT client is the culprit — i.e. confirm that it can support more than 10K client connections — so that I can look at other things outside the VM, such as firewalls.
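One simple way to see which side is capping you is to watch the client JVM's open file-descriptor count while the connections ramp up, since each established TCP connection holds one socket descriptor. A sketch (`JVM_PID` is a placeholder; the current shell stands in for the Java PID so the snippet runs standalone):

```shell
# The fd count should climb toward ~10k as connections are opened.
# If it plateaus well below the "open files" limit, the bottleneck
# is likely outside the JVM (firewall, NAT, conntrack) rather than
# a descriptor limit.
JVM_PID=$$            # placeholder: substitute the real Java PID
ls /proc/"$JVM_PID"/fd | wc -l
```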


Hi @jowel,

did you set the port range for the client instance (that is where you need the big port range)?

Did you try it out?

As we are using the hivemq-mqtt-client in HiveMQ Swarm, I can tell you that 10k connections are easily possible, and as far as I can see in the code we use the default config (I’ll check if this is correct → yep, confirmed).

Michael from the HiveMQ team