How to prevent “Qos 0 channel not writable” from occurring?
If this means that the client(s) subscribed to the QoS 0 topic(s) are not consuming those messages fast enough, how can we make the clients consume them faster?
@michael_w Is there any option to make the single subscriber consume messages faster?
In my case, I’m using the Kafka extension as a client to consume messages from HiveMQ topics and pass them to Kafka. So how can I distribute the message throughput over multiple clients? Is there any specific config for that?
Can you share your event.log please? Because “Qos 0 channel not writable” can only happen for MQTT clients. So it can’t be the Kafka extension, especially as Kafka records don’t have a QoS level.
The line where you saw the dropped message because of “Qos 0 channel not writable” should suffice.
You can find the event.log in the {hivemq_home}/hivemq/log folder.
You have an MQTT client that is the reason for dropped messages. And for that I already provided the solution above.
Is there any option to make the single subscriber consume messages faster?
This is something you have to find out yourself, as it depends on the client you implemented/use. A general tip would be to avoid consuming messages in a blocking way.
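To illustrate the tip above, here is a minimal, broker-free Python sketch of non-blocking consumption: the message callback only enqueues, and slow processing happens on worker threads. The names `on_message` and `handle` are illustrative placeholders, not from any particular MQTT SDK.

```python
import queue
import threading
import time

work = queue.Queue()
processed = []
lock = threading.Lock()

def on_message(payload):
    # Called from the client's network thread: only enqueue here,
    # never do slow work, so the socket keeps draining.
    work.put(payload)

def handle(payload):
    # Placeholder for the real (slow) processing step.
    time.sleep(0.001)
    with lock:
        processed.append(payload)

def worker():
    while True:
        payload = work.get()
        if payload is None:  # sentinel to stop the worker
            work.task_done()
            break
        handle(payload)
        work.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()

for i in range(100):  # simulate a burst of incoming messages
    on_message(i)

# Wait until every enqueued message is handled, then shut down.
for _ in workers:
    work.put(None)
work.join()
for t in workers:
    t.join()

print(len(processed))  # → 100
```

The key design point: the broker sees a subscriber whose socket is always readable, because the receive path never blocks on the business logic.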
So are you saying the Kafka extension has nothing to do with an MQTT client, and that it just passes the messages from the MQTT publisher to the HiveMQ broker topics and then to Kafka? And in order to fetch those messages, we use an MQTT client which subscribes to those topics, and that is where they get dropped?
I have one quick question here: will this affect the data being sent to Kafka? Also, we are using version 4.5.0; can dropped, expired, and dead messages be configured in that HiveMQ version?
I just realized something, when you speak of the “kafka extension” you mean the HiveMQ Kafka extension right? My answers below are given under the assumption that this is the case.
So are you saying kafka extension has nothing to do with MQTT client and it just passes the messages from MQTT publisher to hivemq broker topics and then to Kafka
Exactly.
In order to fetch those messages, we use an MQTT client which subscribes to the topics, and it gets dropped?
I don’t understand this; you don’t need any MQTT client for the Kafka extension. You seem to have a client that subscribes to a topic that receives a lot of messages (maybe “topic/#”?) and that client can’t handle the message throughput.
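On the earlier question about distributing throughput over multiple clients: one standard technique (not specific to this thread, but supported by MQTT 5 and HiveMQ) is shared subscriptions, where several clients subscribe with a `$share/<group>/<filter>` topic filter and the broker delivers each message to only one member of the group. A broker-free Python sketch of the idea, with made-up group and client names:

```python
import itertools

def shared_filter(group: str, topic_filter: str) -> str:
    """Build an MQTT 5 shared-subscription topic filter."""
    return f"$share/{group}/{topic_filter}"

# Each client in the group subscribes with the same filter...
print(shared_filter("kafka-feeders", "topic/#"))  # → $share/kafka-feeders/topic/#

# ...and the broker delivers each publish to exactly one of them.
# Simulated here with a simple round-robin over three clients:
clients = {"client-a": [], "client-b": [], "client-c": []}
rotation = itertools.cycle(clients)
for msg in range(9):
    clients[next(rotation)].append(msg)

print({c: len(msgs) for c, msgs in clients.items()})  # each client gets 3
```

The real load-balancing strategy is up to the broker; the point is that the overwhelmed single subscriber can be replaced by a group sharing the same filter.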
I have one quick question over here, will this affects the data being sent to Kafka ?
It doesn’t affect the HiveMQ Kafka extension.
Also we are using 4.5.0 version, can dropped messages, expired and dead messages be configured in the above hivemq version?
Dropped/expired → yes, was introduced with 4.4
Dead → no, was introduced with 4.6
Side note: we have 4.9 out (the new LTS version), so you might want to upgrade to it. Also, we sunset our previous LTS version 4.5 on the 31st of January, 2023.