I’m doing testing and getting things set up for production using a Docker container, and on the dashboard under Statistics per Cluster Node I see my disk space is in the red at 33.53 GB / 35.48 GB. I can’t figure out what is using all this disk space in the container. Ideas please.
Hi @wpmccormick,
Greetings and welcome to the HiveMQ Community! It’s fantastic to have you here, especially as you dive into MQTT and the HiveMQ broker. We’re always eager to support new users like you.
The disk usage you’re observing could be due to data persistence, logs, or other files generated within the container. You can inspect this further by checking disk usage directly inside the container:
1. Access the HiveMQ container shell:
docker exec -it hivemq bash
2. Inspect disk usage in key directories:
du -h --max-depth=1 /opt/hivemq | sort -hr
Here’s what the directories typically contain:
- /opt/hivemq/data: stores persistent data, such as retained messages and session information.
- /opt/hivemq/log: stores log files, which can grow significantly depending on usage and log levels.
- /opt/hivemq/conf: contains configuration files, which are usually small in size.
3. Exit the container when you’re done:
exit
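If you’d rather not open an interactive shell, the same check can be run in one shot (a sketch; “hivemq” is an assumed container name, so substitute whatever docker ps shows for yours):

```shell
# One-shot disk check: overall filesystem usage for the container's root,
# then per-directory totals under /opt/hivemq, largest first.
# "hivemq" is an assumed container name -- adjust to match `docker ps`.
docker exec hivemq sh -c 'df -h / && du -h --max-depth=1 /opt/hivemq | sort -hr'
```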
Clearing Retained Messages
If retained messages are taking up disk space during testing, you can clear them using an MQTT client. For example:
mosquitto_pub -h <broker-host> -p <port> -t "<topic>" -r -n
Here’s what the parameters mean:
- <broker-host>: replace with your HiveMQ broker’s hostname or IP address.
- <port>: replace with the broker’s MQTT port (usually 1883, or 8883 for TLS).
- <topic>: specify the topic whose retained message you want to clear.
- -r -n: these flags send a retained message with a null (zero-length) payload, effectively clearing it.
If you need to clear retained messages across multiple topics, you can write a script to automate this or use a wildcard in your MQTT client to identify affected topics.
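One way that loop could be sketched with the mosquitto clients (a sketch assuming mosquitto_sub supports --retained-only, as recent versions do; BROKER and the sensors/# filter are placeholder values):

```shell
#!/bin/sh
# Sketch: clear every retained message under a wildcard topic filter.
# BROKER and TOPIC_FILTER are placeholder values -- adjust for your setup.
BROKER="${BROKER:-localhost}"
TOPIC_FILTER="${TOPIC_FILTER:-sensors/#}"

# List retained topics: -v prints "topic payload", --retained-only limits
# output to retained messages, and -W 2 exits after two seconds.
mosquitto_sub -h "$BROKER" -t "$TOPIC_FILTER" --retained-only -W 2 -v 2>/dev/null |
while read -r topic _payload; do
  # Publish a retained null payload to each topic, which clears it.
  # (Note: topic names containing spaces would need more careful parsing.)
  mosquitto_pub -h "$BROKER" -t "$topic" -r -n
done
```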
Feel free to share your findings or let us know if you need further guidance!
Best regards,
Dasha from The HiveMQ Support Team
I had actually covered all that but didn’t mention so as not to muddy the waters.
[opc@bm-instance services]$ docker exec -it ludicrous-building-1-broker-1 /bin/bash
I have no name!@8515d76568e6:~$ pwd; df / -h;du . --max-depth 1 -h | sort -hr
/opt/hivemq
Filesystem Size Used Avail Use% Mounted on
overlay 36G 34G 1.9G 95% /
du: cannot read directory './extensions/hivemq-enterprise-security-extension': Permission denied
594M .
343M ./extensions
202M ./bin
30M ./tools
20M ./data
184K ./third-party-licenses
176K ./conf
12K ./log
4.0K ./audit
0 ./license
0 ./.cache
0 ./backup
I have no name!@8515d76568e6:~$
exit
[opc@bm-instance services]$ du -h -s building/hivemq-primary/extensions/
68M building/hivemq-primary/extensions/
I have had nothing publishing yet to this, so I can’t imagine there are any messages that have been retained at this early stage.
The mystery continues.
Any other ideas @Daria_H ? I’m really stumped.
Hi @wpmccormick
Based on the information provided, the disk usage doesn’t seem to be coming from HiveMQ itself directly. Here’s why:
- The du output shows that the largest directories inside the HiveMQ container are extensions (343M), bin (202M), and data (20M). These are all typical and expected sizes, especially if you have some extensions installed and the broker is running with default configurations.
- The log directory shows very little usage (12K), which indicates that logging is not the cause of the space issue.
- The persistent data (/opt/hivemq/data) is relatively small (20M), so it likely isn’t contributing significantly to the disk usage either.
It seems more likely that the disk usage might be coming from Docker itself, such as:
- Docker Image Layers: Docker sometimes accumulates image or container layers, especially if you’ve been rebuilding or updating containers. These layers can consume significant disk space.
- Docker Volumes or Caches: Unused volumes, networks, or caches could be consuming space outside of the container itself.
Actions to investigate further:
- Check Docker Disk Usage: Run docker system df to see whether the usage comes from images, containers, or volumes.
- Prune Docker: If you haven’t already, running docker system prune -af can help clean up unused containers, images, and volumes, freeing up space.
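If docker system df doesn’t account for the space, it can also help to measure Docker’s storage directory on the host directly (a sketch; /var/lib/docker is the default data root, but yours may differ if data-root is set in daemon.json):

```shell
# Measure what each Docker storage subsystem (overlay2 layers, volumes,
# containers, etc.) actually holds on disk, largest first.
# /var/lib/docker is the default data root -- adjust if yours differs.
sudo du -sh /var/lib/docker/* 2>/dev/null | sort -hr
```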
Let me know what you find!
Best,
Dasha from The HiveMQ Team
Yea I had already started looking at the docker side of things. I feel like I’m coming up empty there. I think it might be more of an issue on the VM:
[opc@bm-instance ludicrous-building-1]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 126G 0 126G 0% /dev
tmpfs 126G 84K 126G 1% /dev/shm
tmpfs 126G 2.3G 124G 2% /run
tmpfs 126G 0 126G 0% /sys/fs/cgroup
/dev/mapper/ocivolume-root 36G 34G 1.9G 95% /
/dev/sda2 1014M 847M 168M 84% /boot
/dev/mapper/ocivolume-oled 10G 2.8G 7.3G 28% /var/oled
/dev/sda1 100M 6.0M 94M 6% /boot/efi
tmpfs 26G 0 26G 0% /run/user/986
tmpfs 26G 0 26G 0% /run/user/1000
See the /dev/mapper/ocivolume-root line. But based on what docker says it’s using here, I’m still not seeing it.
[opc@bm-instance ludicrous-building-1]$ docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 6 4 8.783GB 2.275GB (25%)
Containers 6 6 7.97GB 0B (0%)
Local Volumes 12 4 128.7MB 103.2MB (80%)
Build Cache 0 0 0B 0B
And I’ve run this a couple times now …
[opc@bm-instance ludicrous-building-1]$ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- unused build cache
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
The first time it did clean some things up.
So IDK if it helps, but this is a cluster. Here’s the full config:
<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="config.xsd">
    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
        <tls-tcp-listener>
            <port>8883</port>
            <bind-address>0.0.0.0</bind-address>
            <tls>
                <keystore>
                    <path>hivemq.jks</path>
                    <password>**********</password>
                    <private-key-password>**********</private-key-password>
                </keystore>
                <client-authentication-mode>NONE</client-authentication-mode>
            </tls>
        </tls-tcp-listener>
    </listeners>
    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp>
                <bind-address>0.0.0.0</bind-address>
                <bind-port>7800</bind-port>
            </tcp>
        </transport>
        <discovery>
            <static>
                <node>
                    <host>broker</host>
                    <port>7800</port>
                </node>
                <node>
                    <host>broker-secondary</host>
                    <port>7801</port>
                </node>
            </static>
        </discovery>
    </cluster>
    <control-center>
        <listeners>
            <http>
                <port>8080</port>
                <bind-address>0.0.0.0</bind-address>
            </http>
        </listeners>
    </control-center>
</hivemq>
And here’s the part of the compose file:
broker:
  image: hivemq/hivemq4
  ports:
    - 1883:1883
    - 8883:8883
    - 8080:8080
    - 7800:7800
  volumes:
    - ./services/building/hivemq-primary/config.xml:/opt/hivemq/conf/config.xml
    - ./services/building/hivemq-primary/hivemq.jks:/opt/hivemq/hivemq.jks
    - ./services/building/hivemq-primary/hivemq-server.crt:/opt/hivemq/hivemq-server.crt
broker-secondary:
  image: hivemq/hivemq4
  ports:
    - 7801:7801
  volumes:
    - ./services/building/hivemq-secondary/config.xml:/opt/hivemq/conf/config.xml
I backed out the ESE for now.
EDIT: Since we all agree that HiveMQ is not using this drive space, why is it reporting it as being used?
Hi @wpmccormick, welcome to the community!
The disk space issue you’re seeing in the HiveMQ Dashboard likely reflects the total available disk space on the host system (or volume mounted to the container), rather than just what’s used by HiveMQ itself.
A few things you can check:
- Inspect Disk Usage in the Container: Run
docker exec -it <your-hivemq-container-id> du -sh /opt/hivemq/data
This will show how much space HiveMQ data is consuming.
- Check Docker Volume Usage: If you’re using volumes, run
docker system df -v
This helps identify where disk space is being used.
- Inspect Logs & Persistence: If you have persistence enabled, old retained messages and logs might be taking up space. You can check the logs at /opt/hivemq/log and prune old messages if needed.
- Check Host Disk Usage: Run df -h on your host system to confirm whether the overall disk is filling up.
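When df -h does show the root filesystem nearly full, one way to narrow down the culprit from the host side is the following (a sketch; -x keeps du from crossing into other mounted filesystems):

```shell
# Largest top-level directories on the root filesystem only; -x stops du
# from descending into other mounts such as /boot or /var/oled.
sudo du -xh --max-depth=1 / 2>/dev/null | sort -hr | head -n 15
```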
If you’re still unsure, sharing details about your container setup (Docker Compose or run command) and storage configuration would help us assist you better.
Hope this helps! Let us know what you find.
So this was indeed a host disk space issue.