Disk Space Filling Up - Retained Messages

I’m running HiveMQ CE on a CentOS 7 machine for a SaaS business with >15k connected MQTT devices, and I’ve noticed that the hard drive is creeping up to 100% usage. I have ruled out the typical file bloat sources (/var/log, etc.) and have proper log rotation in place.

I discovered a ton of large log files under the /data/persistence/retained_messages/ directory; it seems that HiveMQ is dumping some sort of statistics, as shown in the quote below.

My MQTT configuration is shown below; how can I disable the generation of these log files, or at least clean them out with logrotate?

    <max-client-id-length>100</max-client-id-length>
    <retry-interval>0</retry-interval>
    <no-connect-packet-idle-timeout-millis>120000</no-connect-packet-idle-timeout-millis>
    <max-queued-messages>100</max-queued-messages>
    <retained-publish-ttl>60</retained-publish-ttl>
    <publish-ttl>1800</publish-ttl>
    <client-session-ttl>240</client-session-ttl>

Example Log File contents:

Uptime(secs): 82200.2 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **
2020/04/28-19:30:07.714343 7f14e4b1d700 [_impl/db_impl.cc:638] STATISTICS:
rocksdb.block.cache.miss COUNT : 280
rocksdb.block.cache.hit COUNT : 204
rocksdb.block.cache.add COUNT : 179
rocksdb.block.cache.add.failures COUNT : 0
rocksdb.block.cache.index.miss COUNT : 0
rocksdb.block.cache.index.hit COUNT : 0
rocksdb.block.cache.index.add COUNT : 0
rocksdb.block.cache.index.bytes.insert COUNT : 0

Hi @ryantaylortnp,

Can you please provide the version of HiveMQ CE you are running?

The configuration you provided is only valid for HiveMQ 3; you can find the MQTT configuration for HiveMQ CE here.

Or just use this (note that retry-interval and retained-publish-ttl do not exist in HiveMQ CE and HiveMQ 4):

<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    ...
    <restrictions>
        <max-client-id-length>100</max-client-id-length>
        <no-connect-idle-timeout>120000</no-connect-idle-timeout>
    </restrictions>

    <mqtt>
        <message-expiry>
            <max-interval>1800</max-interval>
        </message-expiry>
        <session-expiry>
            <max-interval>240</max-interval>
        </session-expiry>
        <queued-messages>
            <max-queue-size>100</max-queue-size>
        </queued-messages>
    </mqtt>
    ...
</hivemq>

It would also help to have high-level information about your scenario, especially how many retained messages you have (topic structure, QoS level, payload size).

P.S.: As a workaround for “retained-publish-ttl”, you can use a PublishInboundInterceptor to change the expiry of retained messages.

Greetings,
Michael from the HiveMQ team

Michael,

Thanks, it does look like configuration syntax might be the issue here. I’m running version 2020.3.

All of our connected devices are MQTT 3.0 clients connecting with a clean session. They send small payloads (typically ~100 bytes, up to 1 KB in some cases), publishing with a QoS of 0, and typically send data to topics that are segmented by our client (100-500 devices per topic). There is one master topic that accepts heartbeats from all connected devices, approximately 2 msg/min per device (<20k devices total, so around 50-70 messages per second).

Please correct me if I’m wrong, but I understand that MQTT 3.0 clients can’t specify a retained message publish TTL, so the PublishInboundInterceptor should not be needed?

Hi @ryantaylortnp,

Do I understand correctly that you are not using retained messages?

Please correct me if I’m wrong, but I understand that MQTT 3.0 clients can’t specify a retained message publish TTL, so the PublishInboundInterceptor should not be needed?

HiveMQ CE treats all packets internally as MQTT 5 packets, which means you can set a message expiry for any incoming MQTT 3 PUBLISH packet.

Here is an example that would add an expiry to all incoming retained messages:

public class SetExpiryForRetainedMessagesExtensionMain implements ExtensionMain {

    @Override
    public void extensionStart(@NotNull ExtensionStartInput extensionStartInput, @NotNull ExtensionStartOutput extensionStartOutput) {
        // Register a client initializer that is called for every connecting client.
        Services.initializerRegistry().setClientInitializer(new ClientInitializer() {
            @Override
            public void initialize(@NotNull InitializerInput initializerInput, @NotNull ClientContext clientContext) {
                // Intercept every inbound PUBLISH from this client.
                clientContext.addPublishInboundInterceptor(new PublishInboundInterceptor() {
                    @Override
                    public void onInboundPublish(@NotNull PublishInboundInput publishInboundInput, @NotNull PublishInboundOutput publishInboundOutput) {
                        // Only modify retained messages: expire them after 180 seconds.
                        if (publishInboundInput.getPublishPacket().getRetain()) {
                            publishInboundOutput.getPublishPacket().setMessageExpiryInterval(180);
                        }
                    }
                });
            }
        });
    }

    @Override
    public void extensionStop(@NotNull ExtensionStopInput extensionStopInput, @NotNull ExtensionStopOutput extensionStopOutput) {
        // Nothing to clean up on shutdown.
    }
}

If you are not familiar with creating an extension or the extension system, this guide should help you.
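
For orientation, the extension is packaged as a jar plus a small hivemq-extension.xml descriptor in its folder under the extensions directory of the HiveMQ home. A minimal sketch of that descriptor could look roughly like this (the id, name, version, and author values are just placeholders; the guide above describes the exact layout):

<hivemq-extension>
    <id>set-expiry-for-retained-messages</id>
    <name>Set Expiry For Retained Messages</name>
    <version>1.0.0</version>
    <author>Your Company</author>
</hivemq-extension>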

Greetings,
Michael

Do I understand correctly that you are not using retained messages?

Correct. Our data is considered stale very quickly, so I’m intending for HiveMQ to not persist any messages.

We don’t have full control over the firmware on all of our connected devices, so it sounds like the interceptor extension would be best for us to guarantee the behavior of the system. Thanks for the reference example, very helpful.
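
Since we don’t want the broker to store anything at all, we will probably take the example one step further and clear the retain flag on the way in instead of just shortening the expiry. An untested sketch of the interceptor body, assuming the modifiable packet from Michael’s example also exposes setRetain (the rest of the extension stays the same):

clientContext.addPublishInboundInterceptor(new PublishInboundInterceptor() {
    @Override
    public void onInboundPublish(@NotNull PublishInboundInput publishInboundInput, @NotNull PublishInboundOutput publishInboundOutput) {
        // Assumption: setRetain(false) drops the retain flag so the broker never stores the message.
        if (publishInboundInput.getPublishPacket().getRetain()) {
            publishInboundOutput.getPublishPacket().setRetain(false);
        }
    }
});

The trade-off is that new subscribers will no longer receive a last known value on subscribe, which is fine for us since our data goes stale so quickly.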

Hello,
Did you solve your problem? I am running into the same issue; the data in the /hivemq/data/persistence folder is very large.

Hi @iot,

Can you verify whether your large persistence folder is linked to this issue?

Kind regards,
Michael from the HiveMQ team