AWS HiveMQ scale

Hi Michael,
After changing the properties file, I think everything works well now; the nodes found each other:

2022-10-17 01:48:37,637 INFO  - Cluster size = 2, members : [dspm6, e69ds].
2022-10-17 01:48:37,638 DEBUG - Sending RUNNING state notification for dspm6 to all nodes.
2022-10-17 01:48:37,640 DEBUG - Removing redundant persistence entries after join
2022-10-17 01:48:37,658 DEBUG - Removed redundant persistence entries after join in 17ms
2022-10-17 01:48:39,616 DEBUG - Reading properties file '/opt/hivemq/extensions/hivemq-s3-cluster-discovery-extension/s3discovery.properties'.
2022-10-17 01:48:39,917 DEBUG - Found following node addresses with the S3 extension: [172.31.44.31:7800, 172.31.47.49:7800]
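For anyone following along, the s3discovery.properties I ended up with looks roughly like this (bucket name and region are placeholders for my setup; the key names follow the S3 discovery extension's documented format):

```properties
# Region and name of the S3 bucket used for discovery (placeholders)
s3-bucket-region:us-east-1
s3-bucket-name:my-hivemq-discovery-bucket

# Prefix and expiration (seconds) for the node-address files the extension writes
file-prefix:hivemq/cluster/nodes/
file-expiration:360

# How often (in seconds) each node refreshes its own entry
update-interval:180

# Use the default AWS credential provider chain (instance profile, env vars, ...)
credentials-type:default_chain
```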

Hi Michael
Sorry for the flooding messages, just got some questions related to the test.

  1. I’ve successfully connected to the load-balancer DNS name using the Node-RED service. I just want to know how I can access the HiveMQ Control Center. Using DNSname:8080 doesn’t work.

  2. For the published topics: am I able to see the data that was published in the S3 bucket? Each published message was a set of timestamps and messages; for now, I can only see them in the Node-RED debug window.

  3. For the load balancer, will the instances scale up or scale down? If not, how can I make them scale up and down depending on the requirements?

Looking forward to your reply.

Best Regards,
Jeffrey Lin

Hi @jeffrey,

I was wondering what you meant by hivemq-s3-cluster-discovery-extension.xml, as the S3 extension only has s3discovery.properties. It seems the blog post had outdated information; this will be fixed.

I’ve successfully connected to the load-balancer DNS name using the Node-RED service. I just want to know how I can access the HiveMQ Control Center. Using DNSname:8080 doesn’t work.

You also need to add the Control Center’s port to the Load Balancer. My guess is your Load Balancer only has this set of listeners:

You click “Edit” and add a listener for your Control Center (according to config.xml, the port is 8080):
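For reference, the Control Center listener section in config.xml typically looks like this (8080 is the default port; the bind-address is an assumption you would adapt to your instances):

```xml
<hivemq>
    <!-- ... other configuration ... -->
    <control-center>
        <listeners>
            <http>
                <!-- The Control Center will be reachable on this port -->
                <port>8080</port>
                <bind-address>0.0.0.0</bind-address>
            </http>
        </listeners>
    </control-center>
</hivemq>
```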

Make sure you have sticky sessions enabled; you can do this in the “Description” menu of the Load Balancer:
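If you prefer the AWS CLI over the console, the listener and sticky-session steps can be sketched like this (the load balancer name and policy name are placeholders; this assumes a Classic Load Balancer):

```shell
# Add an HTTP listener on port 8080 forwarding to the instances' port 8080
aws elb create-load-balancer-listeners \
  --load-balancer-name my-hivemq-lb \
  --listeners "Protocol=HTTP,LoadBalancerPort=8080,InstanceProtocol=HTTP,InstancePort=8080"

# Create a duration-based sticky-session policy ...
aws elb create-lb-cookie-stickiness-policy \
  --load-balancer-name my-hivemq-lb \
  --policy-name cc-sticky \
  --cookie-expiration-period 600

# ... and attach it to the port-8080 listener
aws elb set-load-balancer-policies-of-listener \
  --load-balancer-name my-hivemq-lb \
  --load-balancer-port 8080 \
  --policy-names cc-sticky
```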

For the published topics: am I able to see the data that was published in the S3 bucket? Each published message was a set of timestamps and messages; for now, I can only see them in the Node-RED debug window.

The S3 extension is only used so that the HiveMQ instances can discover each other and form a cluster. HiveMQ will not do anything else with the S3 bucket, hence the name hivemq-s3-cluster-discovery-extension.

For the load balancer, will the instances scale up or scale down? If not, how can I make them scale up and down depending on the requirements?

As the blog post mentions at the end, currently you would have to do this manually by creating (and configuring) or deleting instances and adding/removing them in the “Target Group” to scale up or down.
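Scripted, those manual scale-up/down steps against a target group might look like this (the target group ARN and instance ID are placeholders):

```shell
# Scale up: register a freshly configured HiveMQ instance with the target group
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:...:targetgroup/hivemq/abc123 \
  --targets Id=i-0123456789abcdef0

# Scale down: drain and remove an instance from the target group
aws elbv2 deregister-targets \
  --target-group-arn arn:aws:elasticloadbalancing:...:targetgroup/hivemq/abc123 \
  --targets Id=i-0123456789abcdef0
```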

For production environments, it’s recommended to use automatic provisioning of the EC2 instances (e.g. with Chef, Puppet, Ansible, or similar tools) so you don’t need to configure each EC2 instance manually. Of course, HiveMQ can also be used with Docker, which can further ease the provisioning of HiveMQ nodes.
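As a sketch of the Docker route, a single node can be started from the official hivemq/hivemq4 image like this (the exposed ports and the extension mount path are assumptions you would adapt):

```shell
# Start a HiveMQ 4 node, exposing MQTT (1883) and the Control Center (8080),
# and mounting the S3 discovery extension from the host
docker run -d --name hivemq \
  -p 1883:1883 \
  -p 8080:8080 \
  -v /opt/hivemq/extensions/hivemq-s3-cluster-discovery-extension:/opt/hivemq/extensions/hivemq-s3-cluster-discovery-extension \
  hivemq/hivemq4
```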

Greetings,
Michael

Hi Michael
There’s no HTTP option in my load balancer.

Hi @jeffrey,

Found the issue: I used an existing “Classic” Load Balancer (LB) to show you how to create an HTTP listener, whereas you have a “Network” Load Balancer. I’m not sure what Amazon recommends as best practice when you need both a TCP and an HTTP listener. The options I see:

  1. Use a Classic Load Balancer and add listeners for both ports (MQTT + Control Center)
  2. Use an Application Load Balancer to expose the Control Center and a Network Load Balancer to expose MQTT

If you are currently only testing out HiveMQ, I would say delete the Network LB, create a Classic LB instead, and add the TCP + HTTP listeners to it.

Note 1: Try this one before doing anything else; maybe it’s enough to add a TCP_UDP listener that forwards port 8080 to 8080 (don’t forget to enable sticky sessions).
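For Note 1, on a Network Load Balancer that would roughly be the following (ARNs are placeholders; note the target group’s protocol must match the listener’s):

```shell
# Add a TCP_UDP listener on 8080 to the existing Network Load Balancer,
# forwarding to a TCP_UDP target group on port 8080
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/net/hivemq/abc123 \
  --protocol TCP_UDP --port 8080 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/hivemq-cc/def456
```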

Note 2: The Classic Load Balancer is in a folded menu under “Select load balancer type”.

Hope this helps,
Michael

For the initial load balancer:


This is the new Classic LB, but when I try to use DNSname:8080 to access the Control Center, it doesn’t work.

I don’t see any more config options, and it works here, so let’s debug why it doesn’t work for you:

  • You added the instances as targets to the Classic Load Balancer? You should see them in the “Instances” menu of the Load Balancer.
  • Does the MQTT forwarding still work?
  • 8080 is your configured port for the Control Center, right?


Hi Michael, in the article they didn’t configure port 8080.

In the config.xml file, I see the Control Center listeners on port 8080, so I think there’s something wrong in the article.

Sorry again for the flood of messages; the community doesn’t allow me to reply with multiple images. When I try to change the security group inbound rule: if I change the port to 8080, it’s still the TCP protocol; if I change the type to “HTTP”, the protocol is still TCP and I cannot change the port range to 8080, it’s fixed to port 80.

Yes, this makes sense, as the article only describes how to create a load balancer for the MQTT traffic; the Control Center isn’t covered in the article.

I see we use “Custom TCP” in the inbound rule for the Control Center:


You might want to try that.
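If the console keeps fighting you, the same inbound rule can also be added with the AWS CLI (the security group ID is a placeholder):

```shell
# Allow inbound TCP on 8080 (Control Center) from anywhere; tighten the CIDR
# to your own IP range for anything beyond a quick test
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
```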

Greetings,
Michael

Perfect, Michael, I really appreciate your patient replies, which helped a lot! Now the Control Center is working!
BTW, I’m curious: what is the maximum load these two cluster nodes can handle? In other words, how many publishers/subscribers can those two nodes handle while maintaining the integrity and speed of the transmitted messages?

I’m glad it finally works :partying_face:

BTW, I’m curious: what is the maximum load these two cluster nodes can handle? In other words, how many publishers/subscribers can those two nodes handle while maintaining the integrity and speed of the transmitted messages?

Ah, I “hate” this question! It’s always hard to answer because it most often depends on the use case and on the hardware (basically, this is what our Customer Success team is for, not the community forum).
At least I can say: it shouldn’t be a problem to have a six-figure number of clients connected simultaneously.

Greetings,
Michael

Hahah, got it! Thanks Michael, I really appreciate it!!