QoS 1/2 - Backpressure on full subscriber queues instead of discarding // queued-messages-strategy

Hello,

I’m wondering if the following use case is somehow configurable/achievable in HiveMQ:

Sender Topic Subscriber

For QoS 1/2 messages, if a subscriber queue is full, I would like the broker to withhold the PUBACK to the publishing client instead of discarding messages, to signal slow down/back pressure so the client retries. By default the client still gets a PUBACK, since - correctly, per the MQTT spec - it only acknowledges successful delivery to the broker, not delivery into all subscriber queues.

I could only find the option to specify a discard policy for full queues (queued-messages-strategy), not an option to apply back pressure. But since that part is already configurable, I’m wondering whether there are - via the enterprise edition or plugins - any efforts towards more flexible policies?
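For reference, this is the part of config.xml I mean - as far as I can see the strategy only controls what to discard, not whether to push back on the publisher (values here are just examples):

```xml
<hivemq>
    <mqtt>
        <queued-messages>
            <!-- per-queue limit before the strategy kicks in -->
            <max-queue-size>1000</max-queue-size>
            <!-- "discard" or "discard-oldest" - no back-pressure option -->
            <strategy>discard</strategy>
        </queued-messages>
    </mqtt>
</hivemq>
```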

Thanks, elm

Hello @elm ,

First off, welcome to the HiveMQ Community! We’re always happy to see others interested in MQTT.

With regard to your implementation question, it’s first important to clarify the scope. Changing the way PUBACK/PUBREL or other publish-flow packets function potentially breaks from the MQTT specification and is typically not recommended unless you are pursuing a very specific, well-tested use case, especially in a production environment.

With that in mind, I don’t believe we currently have an extension on our radar that applies back pressure instead of dropping messages once a queue is full. The typical solution is to prevent messages from queuing up beyond the maximum queue size in the first place, for example by narrowing down wildcard subscriptions or implementing shared subscriptions.

We do have an open extension developer guide, however, that provides additional insight into and access to these MQTT mechanisms. Based on the details provided, you are likely looking at an interceptor implementation. I would still recommend solutions that do not require modifying the MQTT messaging behavior, but if your requirements are specific enough to warrant it, the guide covers how to implement an interceptor for this kind of functionality.
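To make that a bit more concrete, here is a very rough sketch only - the isLogPipelineBackedUp() check is purely a placeholder for whatever back-pressure signal you would provide yourself, the Extension SDK does not expose subscriber queue fill levels - of a publish inbound interceptor that rejects publishes with a negative reason code:

```java
import com.hivemq.extension.sdk.api.ExtensionMain;
import com.hivemq.extension.sdk.api.interceptor.publish.PublishInboundInterceptor;
import com.hivemq.extension.sdk.api.packets.publish.AckReasonCode;
import com.hivemq.extension.sdk.api.parameter.*;
import com.hivemq.extension.sdk.api.services.Services;

public class BackPressureExtensionMain implements ExtensionMain {

    @Override
    public void extensionStart(final ExtensionStartInput input, final ExtensionStartOutput output) {
        final PublishInboundInterceptor interceptor = (publishInboundInput, publishInboundOutput) -> {
            final String topic = publishInboundInput.getPublishPacket().getTopic();
            if (topic.endsWith("/logs") && isLogPipelineBackedUp()) {
                // For MQTT 5 publishers this results in a PUBACK/PUBREC with a
                // negative reason code; the message is NOT queued, so the
                // client has to buffer and retry on its own.
                publishInboundOutput.preventPublishDelivery(
                        AckReasonCode.QUOTA_EXCEEDED, "log pipeline backed up, retry later");
            }
        };

        // Attach the interceptor to every connecting client.
        Services.initializerRegistry().setClientInitializer(
                (initializerInput, clientContext) -> clientContext.addPublishInboundInterceptor(interceptor));
    }

    @Override
    public void extensionStop(final ExtensionStopInput input, final ExtensionStopOutput output) {
    }

    private static boolean isLogPipelineBackedUp() {
        return false; // placeholder for your own back-pressure signal
    }
}
```

Keep in mind that the broker does not queue a rejected message, so the publishing client has to buffer and resend on its own - this is exactly the kind of deviation from normal publish handling mentioned above.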

Best,
Aaron from the HiveMQ Team

Hi @AaronTLFranz ,

thanks for the quick reply and detailed advice. It all makes sense and fits with what we have found out so far.

We know our use case is probably not that common: it is a fan-in scenario for telemetry and log data. The basic setup is one topic per device for this data and a shared subscription to consume the messages (other messages and use cases use different topics, so we only need to isolate this behavior for the logging data). This works, scales and operates as expected.
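For context, the consumer side is essentially one shared subscription over the per-device log topics, roughly like this (topic and group names are just illustrative, shown with the HiveMQ MQTT Client):

```java
import com.hivemq.client.mqtt.MqttClient;
import com.hivemq.client.mqtt.datatypes.MqttQos;
import com.hivemq.client.mqtt.mqtt5.Mqtt5AsyncClient;

public class LogConsumer {
    public static void main(String[] args) {
        final Mqtt5AsyncClient client = MqttClient.builder()
                .useMqttVersion5()
                .identifier("log-consumer-1")
                .serverHost("broker.example.com")
                .buildAsync();

        client.connect()
                // Messages from all matching device topics funnel into the
                // shared subscription's queue on the broker, not into
                // per-device queues.
                .thenCompose(connAck -> client.subscribeWith()
                        .topicFilter("$share/log-consumers/devices/+/logs")
                        .qos(MqttQos.AT_LEAST_ONCE)
                        .callback(publish -> {
                            // forward to the downstream log/telemetry store here
                            System.out.println(publish.getTopic() + " -> "
                                    + new String(publish.getPayloadAsBytes()));
                        })
                        .send());
    }
}
```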

However, we’re getting ready for a future with several hundred thousand devices, and a likely failure scenario is that the downstream consumer of the MQTT broker becomes unavailable, or the destination the consumer writes to does. In that case the queues on the broker side will quickly fill up, as the queue for a shared subscription is not per device/topic but per subscriber.

So we’re - as far as I can see - down to:

  • accept message loss/drops in that case
  • build persistence on the broker side with higher queue sizes and a proper backend
  • implement a higher-level protocol on top ourselves that has some kind of flow control to acknowledge end-to-end transmission, or that, based on monitoring, signals the clients, e.g. via an MQTT message, to stop sending logs (see the sketch after this list)
  • find a way to use the MQTT protocol itself for back pressure (my idea above)
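To illustrate the flow-control option from the list: the control topic and the pause/resume payloads below are entirely our own convention, nothing HiveMQ provides, just a sketch of what the device side could do:

```java
import com.hivemq.client.mqtt.MqttClient;
import com.hivemq.client.mqtt.datatypes.MqttQos;
import com.hivemq.client.mqtt.mqtt5.Mqtt5AsyncClient;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicBoolean;

public class DeviceLogPublisher {

    private static final String DEVICE_ID = "device-42";
    private static final AtomicBoolean paused = new AtomicBoolean(false);

    public static void main(String[] args) {
        final Mqtt5AsyncClient client = MqttClient.builder()
                .useMqttVersion5()
                .identifier(DEVICE_ID)
                .serverHost("broker.example.com")
                .buildAsync();

        client.connect().thenCompose(connAck ->
                // Listen for pause/resume commands from monitoring/operations.
                client.subscribeWith()
                        .topicFilter("devices/" + DEVICE_ID + "/log-control")
                        .qos(MqttQos.AT_LEAST_ONCE)
                        .callback(publish -> {
                            final String cmd = new String(publish.getPayloadAsBytes(), StandardCharsets.UTF_8);
                            paused.set("pause".equals(cmd));
                        })
                        .send());

        // Somewhere in the device's logging path:
        sendLogLine(client, "boot complete");
    }

    private static void sendLogLine(final Mqtt5AsyncClient client, final String line) {
        if (paused.get()) {
            // Buffer locally (disk/ring buffer) instead of publishing; flush later on "resume".
            return;
        }
        client.publishWith()
                .topic("devices/" + DEVICE_ID + "/logs")
                .qos(MqttQos.AT_LEAST_ONCE)
                .payload(line.getBytes(StandardCharsets.UTF_8))
                .send();
    }
}
```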

The last option seemed the most convenient, since each IoT client device can easily buffer its own messages, we can keep the infrastructure and operations costs for buffering on the broker side down, and there would be no additional overhead from engineering a solution ourselves. But I agree it carries the risk of breaking something fundamental that is beyond our control. Thanks for the tips so far, I need to dig into the developer guide and see if there is a reasonable approach.

Thanks again,
Elmar