HiveMQ MQTT Client 1.2.0 Issues

Hi,

We are using version 1.2.0 of hivemq-mqtt-client with JDK 8.
In one of our deployments, where the load is very high, we observed that our client is losing its session with the MQTT broker (VerneMQ, in our case).

Is there a known issue about this on version 1.2.0 of hivemq-mqtt-client?
If so, could you please recommend which version we should upgrade to?

Kind Regards,

Hi, @Sameerti, welcome to the HiveMQ Community! We’re excited to have you join us, especially because of your interest in MQTT and the HiveMQ broker. It’s great to see new users like you.

Thank you for reaching out with your query. I understand that you are facing issues with the 1.2.0 version of the HiveMQ MQTT client while using JDK 8, particularly under high load, where it appears that your client is losing its session with the MQTT broker (VerneMQ in this case).

From your description, it’s not entirely clear what you mean by “client is losing a session with the MQTT broker.” If you could provide more specific details about the behavior you’re observing, such as error messages, logs, or conditions under which the issue occurs, it would help us better understand and address the problem.

In the meantime, I recommend upgrading to the latest version of the HiveMQ MQTT client, which is 1.3.3, and using JDK 11. You can find the release details and downloads on the hivemq-mqtt-client GitHub releases page.
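While you gather more details, here is a minimal, hypothetical sketch (the identifier, host, and interval values are placeholders, not taken from your deployment) showing the connect options that determine whether a session survives a disconnect:

```java
import com.hivemq.client.mqtt.MqttClient;
import com.hivemq.client.mqtt.mqtt5.Mqtt5AsyncClient;

public class SessionKeepingConnect {

    public static void main(String[] args) {
        // Hypothetical identifier and host, for illustration only.
        Mqtt5AsyncClient client = MqttClient.builder()
                .useMqttVersion5()
                .identifier("controller-debug")
                .serverHost("localhost")
                .serverPort(1883)
                .automaticReconnectWithDefaultConfig() // reconnect with exponential backoff
                .buildAsync();

        client.connectWith()
                .cleanStart(false)           // resume an existing session instead of starting fresh
                .sessionExpiryInterval(3600) // broker keeps session state for 1 h after a disconnect
                .keepAlive(60)               // 60-second keep alive
                .send()
                .thenAccept(connAck -> System.out.println(
                        "Session present: " + connAck.isSessionPresent()));
    }
}
```

With cleanStart(false) and a non-zero session expiry interval, the broker should keep the session state across reconnects; the sessionPresent flag in the CONNACK tells you whether it actually did.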

Kind regards,
Dasha from the HiveMQ Team

09:47:02 (start)
/opt/mqtt-broker/traces/test.1:<9427.24756.3474> MQTT RECV: CID: "controller-prd-dc-4:1873-5.0" CONNECT(c: controller-prd-dc-4:1873-5.0, v: 5, u: XYZ, p: XYZ, cs: false, ka: 60)

10:26
/opt/mqtt-broker/traces/test.285:<9427.3141.6900> MQTT RECV: CID: "controller-prd-dc-4:1873-5.0" CONNECT(c: controller-prd-dc-4:1873-5.0, v: 5, u: XYZ, p: XYZ, cs: false, ka: 60)

10:26
/opt/mqtt-broker/traces/test.286:<9427.30760.6202> MQTT RECV: CID: "controller-prd-dc-4:1873-5.0" CONNECT(c: controller-prd-dc-4:1873-5.0, v: 5, u: XYZ, p: XYZ, cs: false, ka: 60)
/opt/mqtt-broker/traces/test.286:<9427.30760.6202> MQTT RECV: CID: "controller-prd-dc-4:1873-5.0" DISCONNECT(rc:receive_max_exceeded(147))

10:26
/opt/mqtt-broker/traces/test.287:<9427.23983.3578> MQTT RECV: CID: "controller-prd-dc-4:1873-5.0" CONNECT(c: controller-prd-dc-4:1873-5.0, v: 5, u: XYZ, p: XYZ, cs: false, ka: 60)

10:26
/opt/mqtt-broker/traces/test.288:<9427.23983.3578> MQTT RECV: CID: "controller-prd-dc-4:1873-5.0" DISCONNECT(rc:receive_max_exceeded(147))
/opt/mqtt-broker/traces/test.288:<9427.1800.3592> MQTT RECV: CID: "controller-prd-dc-4:1873-5.0" CONNECT(c: controller-prd-dc-4:1873-5.0, v: 5, u: XYZ, p: XYZ, cs: false, ka: 60)
/opt/mqtt-broker/traces/test.288:<9427.1800.3592> MQTT RECV: CID: "controller-prd-dc-4:1873-5.0" DISCONNECT(rc:receive_max_exceeded(147))

Hi Dasha, thanks for the prompt response.

I have pasted logs from the VerneMQ broker side covering the connect/disconnect sessions.

Please take a look. Do you think upgrading the client version could help?
A JDK upgrade is not an option for us.

So which newer hivemq-mqtt-client version would be compatible with JDK 8 and still help us with this issue?

Also, could you please share an issue ID or support case where a similar issue was reported on 1.2.0 or an earlier version and upgrading helped?

crash.log from the broker side:

2024-08-06 14:17:11 =CRASH REPORT====
crasher:
initial call: vmq_ranch:init/4
pid: <0.31523.188>
registered_name:
exception exit: {{timeout,{gen_server,call,[vmq_tracer,{start_session_trace,<0.31523.188>,{mqtt5_connect,5,<<"XYZ">>,<<"XYZ">>,false,60,<<"controller-prd-dc-4:1873-5.0">>,undefined,#{p_request_response_info => true}}}]}},[{gen_server,call,2,[{file,"gen_server.erl"},{line,215}]},{vmq_mqtt5_fsm,init,3,[{file,"/opt/vernemq/apps/vmq_server/src/vmq_mqtt5_fsm.erl"},{line,149}]},{vmq_mqtt_pre_init,data_in,2,[{file,"/opt/vernemq/apps/vmq_server/src/vmq_mqtt_pre_init.erl"},{line,58}]},{vmq_ranch,handle_message,2,[{file,"/opt/vernemq/apps/vmq_server/src/vmq_ranch.erl"},{line,195}]},{vmq_ranch,loop_,1,[{file,"/opt/vernemq/apps/vmq_server/src/vmq_ranch.erl"},{line,138}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}
ancestors: [<0.392.0>,<0.391.0>,ranch_sup,<0.144.0>]
message_queue_len: 0
messages:
links: [<0.392.0>,#Port<0.134765461>]
dictionary: [{atomics_ref,#Ref<0.2630845199.1241907206.11371>},{rand_seed,{#{jump => #Fun<rand.3.8986388>,max => 288230376151711743,next => #Fun<rand.2.8986388>,type => exsplus},[140093043198255928|74388157429079932]}}]
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 850
neighbours:
2024-08-06 14:17:11 =ERROR REPORT====
Ranch listener {{10,31,155,240},1883} had connection process started with vmq_ranch:start_link/4 at <0.31523.188> exit with reason: {timeout,{gen_server,call,[vmq_tracer,{start_session_trace,<0.31523.188>,{mqtt5_connect,5,<<"XYZ">>,<<"XYZ">>,false,60,<<"controller-prd-dc-4:1873-5.0">>,undefined,#{p_request_response_info => true}}}]}}
2024-08-06 14:17:16 =CRASH REPORT====
crasher:
initial call: vmq_ranch:init/4
pid: <0.31534.188>
registered_name:
exception exit: {{normal,{gen_server,call,[vmq_tracer,{start_session_trace,<0.31534.188>,{mqtt5_connect,5,<<"XYZ">>,<<"XYZ">>,false,60,<<"controller-prd-dc-4:1873-5.0">>,undefined,#{p_request_response_info => true}}}]}},[{gen_server,call,2,[{file,"gen_server.erl"},{line,215}]},{vmq_mqtt5_fsm,init,3,[{file,"/opt/vernemq/apps/vmq_server/src/vmq_mqtt5_fsm.erl"},{line,149}]},{vmq_mqtt_pre_init,data_in,2,[{file,"/opt/vernemq/apps/vmq_server/src/vmq_mqtt_pre_init.erl"},{line,58}]},{vmq_ranch,handle_message,2,[{file,"/opt/vernemq/apps/vmq_server/src/vmq_ranch.erl"},{line,195}]},{vmq_ranch,loop_,1,[{file,"/opt/vernemq/apps/vmq_server/src/vmq_ranch.erl"},{line,138}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}
ancestors: [<0.392.0>,<0.391.0>,ranch_sup,<0.144.0>]
message_queue_len: 0
messages:
links: [<0.392.0>,#Port<0.134765460>]
dictionary: [{atomics_ref,#Ref<0.2630845199.1241907206.11371>},{rand_seed,{#{jump => #Fun<rand.3.8986388>,max => 288230376151711743,next => #Fun<rand.2.8986388>,type => exsplus},[140093073263027583|3809080468938461]}}]
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 846
neighbours:
2024-08-06 14:17:16 =ERROR REPORT====
Ranch listener {{10,31,155,240},1883} had connection process started with vmq_ranch:start_link/4 at <0.31534.188> exit with reason: {normal,{gen_server,call,[vmq_tracer,{start_session_trace,<0.31534.188>,{mqtt5_connect,5,<<"XYZ">>,<<"XYZ">>,false,60,<<"controller-prd-dc-4:1873-5.0">>,undefined,#{p_request_response_info => true}}}]}}

Hi,

I cannot upgrade the JDK, i.e. it has to stay at JDK 8.

Which hivemq-mqtt-client version can we upgrade to that is still compatible with JDK 8? Reiterating: we are currently on 1.2.0.

XYZ is the client connecting to the VerneMQ broker, using hivemq-mqtt-client 1.2.0.

Please find attached the broker-side logs from when the session is lost under load.

Regards,

Hi @Sameerti

The reason code in your broker logs indicates that the client sent more unacknowledged messages than the server's Receive Maximum permits, so the broker sent a DISCONNECT with Reason Code 0x93 (decimal 147, Receive Maximum exceeded).

When a client connects to the broker, the broker responds with a value for “Receive Maximum” in the CONNACK packet. The client should respect this value.

More on the server receive maximum: MQTT Flow Control – MQTT 5 Essentials Part 12
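I can't say from here whether your client build enforces this limit correctly under your load, but as a quick check and a defensive measure, here is a minimal, hypothetical sketch (host, identifier, and topic are placeholders) that reads the broker's Receive Maximum from the CONNACK and bounds the number of in-flight QoS 1 publishes to it:

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Semaphore;

import com.hivemq.client.mqtt.MqttClient;
import com.hivemq.client.mqtt.datatypes.MqttQos;
import com.hivemq.client.mqtt.mqtt5.Mqtt5AsyncClient;
import com.hivemq.client.mqtt.mqtt5.Mqtt5BlockingClient;
import com.hivemq.client.mqtt.mqtt5.message.connect.connack.Mqtt5ConnAck;

public class ReceiveMaximumAwarePublisher {

    public static void main(String[] args) {
        // Hypothetical identifier and host, for illustration only.
        Mqtt5BlockingClient client = MqttClient.builder()
                .useMqttVersion5()
                .identifier("controller-debug")
                .serverHost("localhost")
                .serverPort(1883)
                .buildBlocking();

        Mqtt5ConnAck connAck = client.connect();
        // The broker announces how many unacknowledged QoS 1/2 PUBLISHes
        // it will accept from this client at a time.
        int receiveMaximum = connAck.getRestrictions().getReceiveMaximum();
        System.out.println("Broker Receive Maximum: " + receiveMaximum);

        // Bound the number of in-flight QoS 1 publishes to that value.
        Semaphore inFlight = new Semaphore(receiveMaximum);
        Mqtt5AsyncClient async = client.toAsync();
        for (int i = 0; i < 1000; i++) {
            inFlight.acquireUninterruptibly();
            async.publishWith()
                    .topic("blah/ep/cnt/notifications") // hypothetical topic
                    .qos(MqttQos.AT_LEAST_ONCE)
                    .payload(("message " + i).getBytes(StandardCharsets.UTF_8))
                    .send()
                    .whenComplete((result, throwable) -> inFlight.release());
        }

        // Wait until every publish has been acknowledged, then disconnect.
        inFlight.acquireUninterruptibly(receiveMaximum);
        client.disconnect();
    }
}
```

Printing the value on every connect also makes it easy to correlate with the receive_max_exceeded disconnects in your traces.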

I hope this helps,
Best
Dasha from the HiveMQ Team

Hi Dasha,

Thanks again.

We think com.hivemq.client.mqtt.datatypes.MqttSharedTopicFilter can help us.

I tried to use it like below, and it produces my topic name as: $share/group/test/
mqtt5AsyncClient.subscribeWith().topicFilter(MqttSharedTopicFilter.of("group", topic)).qos …

However, my broker, i.e. VerneMQ, wants the subscription topic with the prefix "\", i.e.: \$share/group/test/
How could we add the "\"?

Regards,
Sameer

Hi Dasha,

Could you please give an update on the above query? It's very critical for us.

Regards

Hi Sameer,

Thank you for reaching out, and it’s great to hear you’re exploring the use of MqttSharedTopicFilter with HiveMQ.

While HiveMQ MQTT Broker does indeed support shared subscriptions, I understand there might be some differences in how VerneMQ expects the topic filter format. If possible, could you please share a reference to VerneMQ’s documentation on this?

Additionally, it would be helpful to understand more about the issue you’re encountering when using \$share/group/test/ with the HiveMQ MQTT client. Are you experiencing an error, exception, or unexpected behavior in your shared subscription? Your details will help us assist you more effectively.

Looking forward to your insights.

Best regards,
Dasha from the HiveMQ Team

Hi Dasha,

Here's the VerneMQ documentation on this: Shared subscriptions | VerneMQ

And the issue I'm encountering when using \$share/group/test/ with the HiveMQ MQTT client is that messages do not seem to be delivered to all subscribers.

I set up three subscribers; one is my application (which uses hivemq-mqtt-client), and the other two are standalone mosquitto_sub instances.

mosquitto_sub gets messages fine when subscribed to the topic: \$share/group/blah/ep/cnt/notifications

My application uses this code:

mqtt5AsyncClient.subscribeWith().topicFilter(MqttSharedTopicFilter.of("group",topic)).qos(MqttQos.AT_LEAST_ONCE).noLocal(false).callback

The above code subscribes to the topic like this:
$share/group/blah/ep/cnt/notifications
So please observe the missing \ at the front.
How could I add that with the HiveMQ client?

Hi Sameerti,

Thank you for reaching out with the details of your issue. I understand how frustrating it can be when things don’t work as expected, especially when it comes to ensuring that all subscribers receive messages as intended.

Regarding your concern, you actually don't need to add the backslash (\) in the Java client. The shared subscription filter should indeed start with the dollar sign, like this: $share/group/blah/ep/cnt/notifications.

In the VerneMQ documentation you referenced, the \$ is specifically used in the terminal to prevent bash from interpreting it as a variable, which is why it might look different there. This shouldn’t affect how you use the shared subscription filter in your Java client.

If your application is having trouble subscribing while Mosquitto works fine, it might be worth adding some debugging output of the topic filter before your client subscribes and some debugging output of the SubAck result code after the client subscribes. This can help confirm exactly what the client is attempting to subscribe to and may provide further insights.
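For example, here is a minimal sketch of that debugging approach (the identifier and host are placeholders; the topic is taken from your post), assuming the standard hivemq-mqtt-client async API:

```java
import java.nio.charset.StandardCharsets;

import com.hivemq.client.mqtt.MqttClient;
import com.hivemq.client.mqtt.datatypes.MqttQos;
import com.hivemq.client.mqtt.datatypes.MqttSharedTopicFilter;
import com.hivemq.client.mqtt.mqtt5.Mqtt5AsyncClient;

public class SharedSubscriptionDebug {

    public static void main(String[] args) {
        // Hypothetical identifier and host, for illustration only.
        Mqtt5AsyncClient client = MqttClient.builder()
                .useMqttVersion5()
                .identifier("controller-debug")
                .serverHost("localhost")
                .serverPort(1883)
                .buildAsync();

        // Same construction as in your snippet: no backslash needed.
        MqttSharedTopicFilter filter =
                MqttSharedTopicFilter.of("group", "blah/ep/cnt/notifications");
        // Print exactly what the client will put in the SUBSCRIBE packet.
        System.out.println("Subscribing to: " + filter);

        client.connect()
                .thenCompose(connAck -> client.subscribeWith()
                        .topicFilter(filter)
                        .qos(MqttQos.AT_LEAST_ONCE)
                        .callback(publish -> System.out.println("Received on " + publish.getTopic()
                                + ": " + new String(publish.getPayloadAsBytes(), StandardCharsets.UTF_8)))
                        .send())
                // The SUBACK reason codes show whether the broker accepted the subscription.
                .thenAccept(subAck -> System.out.println("SUBACK: " + subAck.getReasonCodes()))
                .exceptionally(t -> {
                    System.err.println("Subscribe failed: " + t);
                    return null;
                });
    }
}
```

If the printed filter starts with $share/ and the SUBACK reason code is a granted QoS, the client side is doing the right thing, and the investigation can move to how the broker distributes messages within the shared subscription group.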

Best regards,
Dasha from the HiveMQ Team