Hi Michael
Sorry for flooding you with messages; I just have some questions related to the test.
I’ve successfully connected to the load balancer DNS name using the Node-RED service. I just want to know how I can access the HiveMQ Control Center. Using DNSname:8080? That doesn’t work.
For the published topics: am I able to see the data that was published in the S3 bucket? Each published message was a set of timestamps and messages; for now, I can only see them in the Node-RED debug window.
For the load balancer: will the instances scale up or scale down? If not, how can I make them scale up and down depending on the requirements?
I was wondering what you meant with hivemq-s3-cluster-discovery-extension.xml, as the S3 extension only has an s3discovery.properties file. It seems the blog post had outdated information; this will be fixed.
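For reference, a minimal s3discovery.properties might look like the sketch below. The bucket name, region, and interval values are placeholders; double-check the exact keys against the documentation for your version of the extension:

```properties
# Bucket the HiveMQ nodes use to register themselves for cluster discovery
s3-bucket-name:my-hivemq-discovery-bucket
s3-bucket-region:us-east-1

# Prefix under which each node writes its discovery file
file-prefix:hivemq/cluster/nodes/

# How long (seconds) a node's entry stays valid, and how often it is refreshed
file-expiration:360
update-interval:180

# Use the default AWS credentials chain (instance profile, env vars, ...)
credentials-type:default
```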
I’ve successfully connected to the load balancer DNS name using the Node-RED service. I just want to know how I can access the HiveMQ Control Center. Using DNSname:8080? That doesn’t work.
You also need to add the port of the Control Center to the Load Balancer. My guess is your Load Balancer only has this listener set up:
For the published topics: am I able to see the data that was published in the S3 bucket? Each published message was a set of timestamps and messages; for now, I can only see them in the Node-RED debug window.
The S3 extension is only used so that the HiveMQ instances can discover each other and form a cluster. HiveMQ will not do anything else with the S3 bucket, hence the name hivemq-s3-cluster-discovery-extension.
For the load balancer: will the instances scale up or scale down? If not, how can I make them scale up and down depending on the requirements?
So, as the blog post mentions at the end, currently you would have to do this manually: create (and configure) or delete instances and add/remove them in the “Target Group” to scale up or down.
For production environments it’s recommended to use automatic provisioning of the EC2 instances (e.g. with Chef, Puppet, Ansible, or similar tools) so you don’t need to configure each EC2 instance manually. Of course, HiveMQ can also be used with Docker, which can further ease the provisioning of HiveMQ nodes.
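To illustrate the Docker route, a single node could be started with the official image roughly like this (the container name is just an example value):

```shell
# Start a HiveMQ node, exposing MQTT (1883) and the Control Center (8080)
docker run -d --name hivemq-node-1 \
  -p 1883:1883 \
  -p 8080:8080 \
  hivemq/hivemq4
```

For an actual cluster you would additionally mount a configuration with the S3 discovery extension enabled, so the containers can find each other.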
Found the issue: I used an existing “Classic” Load Balancer (LB) to show you how to create an HTTP listener, whereas you have a “Network Load Balancer”. So I’m not sure what Amazon recommends as best practice in case you need both a TCP and an HTTP listener. The options I see:
Use a Classic Load Balancer and add listeners for both ports (MQTT + Control Center)
Use an Application Load Balancer to expose the Control Center and a Network Load Balancer to expose MQTT
If you are currently only testing out HiveMQ, I would say delete the Network LB, create a Classic LB instead, and add the TCP + HTTP listeners to it.
Note 1: You can try this out before doing anything else; maybe it is enough to add a TCP_UDP listener that forwards port 8080 to 8080 (don’t forget to enable sticky sessions).
Note 2: The Classic Load Balancer option is in a collapsed (“folded”) section of the “Select load balancer type” screen.
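If you prefer the CLI over the console, a Classic LB with both listeners could be sketched like this (the LB name, subnet, security group, and instance IDs are all placeholders):

```shell
# Create a Classic Load Balancer with a TCP listener for MQTT (1883)
# and an HTTP listener for the Control Center (8080)
aws elb create-load-balancer \
  --load-balancer-name hivemq-lb \
  --listeners \
    "Protocol=TCP,LoadBalancerPort=1883,InstanceProtocol=TCP,InstancePort=1883" \
    "Protocol=HTTP,LoadBalancerPort=8080,InstanceProtocol=HTTP,InstancePort=8080" \
  --subnets subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0

# Register the HiveMQ EC2 instances with the LB
aws elb register-instances-with-load-balancer \
  --load-balancer-name hivemq-lb \
  --instances i-0123456789abcdef0 i-0123456789abcdef1

# Enable sticky sessions on the Control Center listener
aws elb create-lb-cookie-stickiness-policy \
  --load-balancer-name hivemq-lb \
  --policy-name cc-sticky \
  --cookie-expiration-period 3600
aws elb set-load-balancer-policies-of-listener \
  --load-balancer-name hivemq-lb \
  --load-balancer-port 8080 \
  --policy-names cc-sticky
```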
Sorry again for flooding you with messages; the community doesn’t allow me to reply with multiple images. When I try to change the security group inbound rule: if I change the port to 8080, the protocol is still TCP, and if I change the type to “HTTP”, the protocol stays TCP and I cannot change the port range to 8080; it’s fixed to port 80.
Yes, this makes sense, as the article only describes how to create a load balancer for the MQTT traffic; the Control Center isn’t covered there.
I see we use a Custom TCP rule in the inbound rules for the Control Center:
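The equivalent rule via the CLI would look roughly like this (the security group ID and CIDR range are placeholders; restrict the source range as tightly as possible):

```shell
# Allow inbound TCP on 8080 (Control Center) from a given network range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 203.0.113.0/24
```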
Perfect, Michael, I much appreciate your patient reply, it really helps a lot! Now the Control Center is working!
By the way, I’m curious what the maximum load these two cluster nodes can handle is. In other words, how many publishers/subscribers can those two nodes handle while maintaining the integrity and speed of the transmitted messages?
By the way, I’m curious what the maximum load these two cluster nodes can handle is. In other words, how many publishers/subscribers can those two nodes handle while maintaining the integrity and speed of the transmitted messages?
Ah, I “hate” this question! It’s always hard to answer because it most often depends on the use case and the hardware (this is basically what our Customer Success team is for, rather than the community forum).
At least I can say: it shouldn’t be a problem to have a six-figure number of clients connected simultaneously.