Helm chart issue

Hello, I'm following this guide and I'm facing an issue when I try to adapt the HiveMQ cluster with a new configuration. When I try to update the number of nodes with the following command:

❯ helm upgrade --install hivemq hivemq/hivemq-operator --set hivemq.nodeCount=1

I get:

Error: UPGRADE FAILED: cannot patch "hivemq" with kind HiveMQCluster: Internal error occurred: failed calling webhook "hivemq-cluster-policy.hivemq.com": Post "https://hivemq-operator-operator.default.svc:443/api/v1/validate/hivemq-clusters?timeout=30s": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
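
For reference, a sketch of how the certificate served by the webhook could be inspected (service name and port taken from the error message above; run the port-forward in one terminal and openssl in another):

❯ kubectl port-forward svc/hivemq-operator-operator 8443:443
❯ openssl s_client -connect localhost:8443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If the certificate really only carries a legacy Common Name, that grep should come back empty.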

This is my current situation:

  • I am using minikube as a local k8s cluster
  • the command kubectl get hivemq-clusters returns
NAME     SIZE   IMAGE            VERSION     STATUS   ENDPOINT   MESSAGE
hivemq   3      hivemq/hivemq4   k8s-4.4.2

(strangely the status is not appearing)

  • all the operator components seem to be running (checked both with the minikube dashboard and with the kubectl command line; see the commands below)
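
Roughly what I checked (just a sketch; the resource names are whatever the default chart install created):

❯ kubectl get pods,deployments,services
❯ minikube dashboard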

Any idea on how to fix that?
Thanks for your time :slight_smile:

Hey,

sorry to hear you’re running into issues.
We are aware of a few minor problems with the TLS configuration of the validation hook in the operator and we’re going to fix them with an upcoming release.

In the meantime you can simply disable the hook by also specifying --set operator.admissionHookEnabled=false in your helm upgrade command.
Note that you will first have to run kubectl delete validatingwebhookconfigurations to delete the existing webhook configuration, so that the subsequent helm upgrade command can patch the cluster resource.
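
For example (the exact name of the webhook configuration in your cluster is an assumption here, so list it first):

❯ kubectl get validatingwebhookconfigurations
❯ kubectl delete validatingwebhookconfigurations <name-from-the-previous-output>
❯ helm upgrade --install hivemq hivemq/hivemq-operator --set hivemq.nodeCount=1 --set operator.admissionHookEnabled=false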

Best regards,
Simon

Hey Simon,
thanks a lot for the fast reply!
I disabled admissionHookEnabled as you proposed, but I'm still having trouble getting the cluster to start successfully (even after waiting 15 minutes):

❯ kubectl get hivemq-clusters
NAME     SIZE   IMAGE            VERSION     STATUS     ENDPOINT   MESSAGE
hivemq   1      hivemq/hivemq4   k8s-4.4.2   Updating              Waiting for deployment to become ready, ready: 0/1

Looking at the pods, I see this one pending for several minutes:

❯ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
hivemq-6bbf5ff6b9-vswll                            0/1     Pending   0          12m
hivemq-hivemq-operator-operator-69b6ff6f97-mfpgc   1/1     Running   0          3m24s

and when I try to look at the logs, the output is empty:

❯ kubectl logs hivemq-6bbf5ff6b9-vswll
❯

Do you have any suggestions on how to proceed?
Thanks
S.

Also, to give more context, I raised the log levels of the operator and the HiveMQ broker to DEBUG:

❯ helm upgrade --install hivemq hivemq/hivemq-operator --set hivemq.logLevel=DEBUG --set operator.admissionHookEnabled=false --set hivemq.nodeCount=1 --set operator.logLevel=DEBUG

Checking the logs of the other running pod (the operator), I get this output:

❯ kubectl get hivemq-clusters
NAME     SIZE   IMAGE            VERSION     STATUS     ENDPOINT   MESSAGE
hivemq   1      hivemq/hivemq4   k8s-4.4.2   Updating              Waiting for deployment to become ready, ready: 0/1
❯ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
hivemq-6bbf5ff6b9-lhqz8                            0/1     Pending   0          10s
hivemq-hivemq-operator-operator-674cf799c5-zmsdl   1/1     Running   0          29s
❯ kubectl logs hivemq-hivemq-operator-operator-674cf799c5-zmsdl
Picked up JAVA_TOOL_OPTIONS: -XX:+UnlockExperimentalVMOptions -XX:InitialRAMPercentage=30 -XX:MaxRAMPercentage=80 -XX:MinRAMPercentage=30
14:09:01.546 [main] DEBUG i.m.c.beans.DefaultBeanIntrospector - Found BeanIntrospection for type: class io.micronaut.health.HeartbeatDiscoveryClientCondition
14:09:01.573 [main] DEBUG i.m.c.beans.DefaultBeanIntrospector - Found BeanIntrospection for type: class io.micronaut.health.HeartbeatDiscoveryClientCondition
14:09:01.579 [ForkJoinPool.commonPool-worker-3] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.methvin.watchservice.MacOSXListeningWatchService
14:09:01.645 [ForkJoinPool.commonPool-worker-3] DEBUG io.micronaut.core.reflect.ClassUtils - Class io.methvin.watchservice.MacOSXListeningWatchService is not present
14:09:01.646 [ForkJoinPool.commonPool-worker-3] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.methvin.watchservice.MacOSXListeningWatchService
14:09:01.646 [ForkJoinPool.commonPool-worker-3] DEBUG io.micronaut.core.reflect.ClassUtils - Class io.methvin.watchservice.MacOSXListeningWatchService is not present
14:09:01.875 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.reactivex.Observable
14:09:01.876 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Successfully loaded class io.reactivex.Observable
14:09:01.938 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class reactor.core.publisher.Flux
14:09:01.939 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Class reactor.core.publisher.Flux is not present
14:09:01.939 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class kotlinx.coroutines.flow.Flow
14:09:01.940 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Class kotlinx.coroutines.flow.Flow is not present
14:09:01.940 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.reactivex.rxjava3.core.Flowable
14:09:01.941 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Class io.reactivex.rxjava3.core.Flowable is not present
14:09:01.941 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.reactivex.rxjava3.core.Observable
14:09:01.941 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Class io.reactivex.rxjava3.core.Observable is not present
14:09:01.942 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.reactivex.Single
14:09:01.942 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Successfully loaded class io.reactivex.Single
14:09:01.943 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class reactor.core.publisher.Mono
14:09:01.944 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Class reactor.core.publisher.Mono is not present
14:09:01.944 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.reactivex.Maybe
14:09:01.944 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Successfully loaded class io.reactivex.Maybe
14:09:01.944 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.reactivex.rxjava3.core.Single
14:09:01.945 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Class io.reactivex.rxjava3.core.Single is not present
14:09:01.945 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.reactivex.rxjava3.core.Maybe
14:09:01.946 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Class io.reactivex.rxjava3.core.Maybe is not present
14:09:01.946 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.reactivex.Completable
14:09:01.947 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Successfully loaded class io.reactivex.Completable
14:09:01.948 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Attempting to dynamically load class io.reactivex.rxjava3.core.Completable
14:09:01.948 [main] DEBUG io.micronaut.core.reflect.ClassUtils - Class io.reactivex.rxjava3.core.Completable is not present
14:09:01.950 [main] DEBUG i.m.web.router.DefaultRouteBuilder - Created Route: POST /api/v1/validate/hivemq-clusters -> ClusterValidationController#io.micronaut.context.DefaultBeanContext$4@53bc1328 (application/json )
14:09:02.262 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
14:09:02.262 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
14:09:02.266 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 2
14:09:02.266 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 2
14:09:02.266 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
14:09:02.266 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
14:09:02.266 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
14:09:02.266 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
14:09:02.267 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
14:09:02.267 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
14:09:02.267 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
14:09:02.267 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
14:09:02.267 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
14:09:02.267 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
14:09:02.267 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
14:09:02.284 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
14:09:02.285 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
14:09:02.285 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
14:09:02.647 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 2
14:09:02.663 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
14:09:02.663 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
14:09:02.759 [main] DEBUG i.m.h.server.netty.NettyHttpServer - Binding hivemq-operator server to *:443
14:09:02.785 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 1 (auto-detected)
14:09:02.790 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
14:09:02.790 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
14:09:02.797 [main] DEBUG io.netty.util.NetUtil - Loopback interface: lo (lo, 127.0.0.1)
14:09:02.799 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 4096
14:09:02.841 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 02:42:ac:ff:fe:11:00:03 (auto-detected)
14:09:02.965 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 5450ms. Server Running: http://hivemq-hivemq-operator-operator-674cf799c5-zmsdl:443
14:09:03.071 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client from Kubernetes config...
14:09:03.072 [main] DEBUG io.fabric8.kubernetes.client.Config - Did not find Kubernetes config at: [/root/.kube/config]. Ignoring.
14:09:03.072 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client from service account...
14:09:03.072 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account host and port: 10.96.0.1:443
14:09:03.073 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt].
14:09:03.073 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account token at: [/var/run/secrets/kubernetes.io/serviceaccount/token].
14:09:03.073 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client namespace from Kubernetes service account namespace path...
14:09:03.073 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
14:09:05.439 [main] DEBUG i.f.k.c.internal.VersionUsageUtils - The client is using resource type 'customresourcedefinitions' with unstable version 'v1beta1'
14:09:07.561 [main] DEBUG com.hivemq.util.TemplateUtil - Templates will be read from 'null'
14:09:07.641 [main] INFO  com.hivemq.Operator - Operating from namespace 'default'
14:09:07.642 [main] INFO  com.hivemq.Operator - Initializing HiveMQ operator
14:09:07.669 [pool-1-thread-1] DEBUG i.f.k.c.d.i.WatchConnectionManager - Connecting websocket ... io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@35a1288b
14:09:07.669 [main] DEBUG i.f.k.c.d.i.WatchConnectionManager - Connecting websocket ... io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@67d32a54
14:09:07.962 [OkHttp https://10.96.0.1/...] DEBUG i.f.k.c.d.i.WatchConnectionManager - WebSocket successfully opened
14:09:07.962 [OkHttp https://10.96.0.1/...] DEBUG i.f.k.c.d.i.WatchConnectionManager - WebSocket successfully opened
14:09:07.963 [main] INFO  com.hivemq.Operator - Operator started in 321ms
14:09:07.963 [pool-1-thread-1] INFO  com.hivemq.AbstractWatcher - CustomResource watcher running for kinds HiveMQCluster
14:09:08.564 [pool-1-thread-2] INFO  com.hivemq.Operator - Syncing state for cluster hivemq
14:09:08.580 [pool-1-thread-3] DEBUG com.hivemq.util.DeploymentUtil - Trying to create service mqtt for cluster hivemq
14:09:08.581 [pool-1-thread-4] DEBUG com.hivemq.util.DeploymentUtil - Trying to create service cc for cluster hivemq
14:09:08.582 [pool-1-thread-5] DEBUG com.hivemq.util.DeploymentUtil - Trying to create service cluster for cluster hivemq
14:09:09.342 [pool-1-thread-5] DEBUG com.hivemq.util.DeploymentUtil - Service cluster is not of type LoadBalancer, will not monitor creation
14:09:09.344 [pool-1-thread-3] DEBUG com.hivemq.util.DeploymentUtil - Service mqtt is not of type LoadBalancer, will not monitor creation
14:09:09.362 [pool-1-thread-4] DEBUG com.hivemq.util.DeploymentUtil - Service cc is not of type LoadBalancer, will not monitor creation
14:09:09.363 [pool-1-thread-2] DEBUG c.h.operations.DeploymentOperable - Syncing deployment for cluster hivemq
14:09:09.548 [pool-1-thread-6] DEBUG com.hivemq.util.CrdUtil - Failed to update status
java.lang.NullPointerException: null
	at com.hivemq.util.CrdUtil.initializeOrGetConditions(CrdUtil.java:543)
	at com.hivemq.util.CrdUtil.lambda$updateStatusCondition$4(CrdUtil.java:519)
	at com.hivemq.util.CrdUtil.updateStatus(CrdUtil.java:469)
	at com.hivemq.util.CrdUtil.updateStatus(CrdUtil.java:435)
	at com.hivemq.util.CrdUtil.updateStatusCondition(CrdUtil.java:518)
	at com.hivemq.operations.ServiceOperable$1.onSuccess(ServiceOperable.java:71)
	at com.hivemq.operations.ServiceOperable$1.onSuccess(ServiceOperable.java:64)
	at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1089)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
14:09:12.541 [pool-1-thread-2] DEBUG com.hivemq.util.CrdUtil - Updating status on cluster hivemq: Applying Deployment, state: Updating
14:09:13.276 [pool-1-thread-2] INFO  com.hivemq.util.DeploymentUtil - Waiting for deployment hivemq to roll out...
14:09:13.354 [pool-1-thread-2] DEBUG com.hivemq.util.CrdUtil - Updating status condition on cluster hivemq of type AllNodesReady to status False with reason Waiting for roll-out to complete
14:09:13.642 [pool-1-thread-2] DEBUG com.hivemq.util.CrdUtil - Updating status on cluster hivemq: Waiting for deployment to become ready, ready: 0/1, state: Updating
14:09:13.644 [pool-1-thread-2] DEBUG i.f.k.c.d.i.WatchConnectionManager - Connecting websocket ... io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@7789b4ab
14:09:13.750 [OkHttp https://10.96.0.1/...] DEBUG i.f.k.c.d.i.WatchConnectionManager - WebSocket successfully opened
14:09:17.343 [OkHttp https://10.96.0.1/...] DEBUG com.hivemq.Operator - Interrupted while waiting for future
java.util.concurrent.TimeoutException: Waited 5 seconds (plus 313400 nanoseconds delay) for com.google.common.util.concurrent.TrustedListenableFutureTask@1d897dce[status=PENDING, info=[task=[running=[RUNNING ON pool-1-thread-2], java.util.concurrent.Executors$RunnableAdapter@c22de7d[Wrapped task = com.hivemq.Operator$$Lambda$450/0x00000001005fc440@54c8aa5a]]]]
	at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:506)
	at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:95)
	at com.hivemq.Operator.applyCluster(Operator.java:233)
	at com.hivemq.Operator.lambda$run$2(Operator.java:143)
	at com.hivemq.AbstractWatcher.handleAction(AbstractWatcher.java:201)
	at com.hivemq.AbstractWatcher$2.eventReceived(AbstractWatcher.java:140)
	at com.hivemq.AbstractWatcher$2.eventReceived(AbstractWatcher.java:129)
	at io.fabric8.kubernetes.client.utils.WatcherToggle.eventReceived(WatcherToggle.java:49)
	at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onMessage(WatchConnectionManager.java:235)
	at okhttp3.internal.ws.RealWebSocket.onReadMessage(RealWebSocket.java:323)
	at okhttp3.internal.ws.WebSocketReader.readMessageFrame(WebSocketReader.java:219)
	at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:105)
	at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:274)
	at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:214)
	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
14:09:17.343 [OkHttp https://10.96.0.1/...] DEBUG com.hivemq.Operator - Cancelling current operation
14:09:17.345 [pool-1-thread-2] DEBUG i.f.k.c.d.i.WatchConnectionManager - Force closing the watch io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@7789b4ab
14:09:17.346 [pool-1-thread-2] DEBUG i.f.k.c.d.i.WatchConnectionManager - Closing websocket okhttp3.internal.ws.RealWebSocket@12d0df5
14:09:17.347 [pool-1-thread-7] INFO  com.hivemq.Operator - Syncing state for cluster hivemq
14:09:17.350 [pool-1-thread-2] DEBUG com.hivemq.Operator - State synchronization was cancelled by another modification
14:09:17.349 [pool-1-thread-8] DEBUG com.hivemq.util.DeploymentUtil - Trying to create service mqtt for cluster hivemq
14:09:17.351 [pool-1-thread-10] DEBUG com.hivemq.util.DeploymentUtil - Trying to create service cluster for cluster hivemq
14:09:17.351 [pool-1-thread-9] DEBUG com.hivemq.util.DeploymentUtil - Trying to create service cc for cluster hivemq
14:09:17.363 [OkHttp https://10.96.0.1/...] DEBUG i.f.k.c.d.i.WatchConnectionManager - WebSocket close received. code: 1000, reason:
14:09:17.364 [OkHttp https://10.96.0.1/...] DEBUG i.f.k.c.d.i.WatchConnectionManager - Ignoring onClose for already closed/closing websocket
14:09:17.641 [pool-1-thread-9] DEBUG com.hivemq.util.DeploymentUtil - Service cc is not of type LoadBalancer, will not monitor creation
14:09:17.662 [pool-1-thread-8] DEBUG com.hivemq.util.DeploymentUtil - Service mqtt is not of type LoadBalancer, will not monitor creation
14:09:17.739 [pool-1-thread-10] DEBUG com.hivemq.util.DeploymentUtil - Service cluster is not of type LoadBalancer, will not monitor creation
14:09:17.740 [pool-1-thread-7] DEBUG c.h.operations.DeploymentOperable - Syncing deployment for cluster hivemq
14:09:17.841 [pool-1-thread-11] DEBUG com.hivemq.util.CrdUtil - Updating status condition on cluster hivemq of type AllServicesReady to status True with reason Services transitioned to ready state
14:09:18.352 [pool-1-thread-7] DEBUG c.h.operations.DeploymentOperable - Proceeding with diff: []
14:09:18.385 [pool-1-thread-7] DEBUG com.hivemq.util.CrdUtil - Updating status on cluster hivemq: Applying Deployment, state: Updating
14:09:18.437 [pool-1-thread-7] DEBUG c.h.operations.DeploymentOperable - Live updating cluster hivemq...
14:09:18.438 [pool-1-thread-7] DEBUG c.h.operations.DeploymentOperable - Diff [] only contains hot-reloadable changes, checking if live changes are possible.
14:09:18.554 [pool-1-thread-7] DEBUG c.h.operations.live.ExtensionManager - Extension state unchanged
14:09:18.568 [pool-1-thread-7] DEBUG com.hivemq.util.CrdUtil - Updating status condition on cluster hivemq of type AllExtensionsLoaded to status True with reason Extension state unchanged in last update
14:09:18.639 [pool-1-thread-7] DEBUG c.h.operations.live.LogLevelManager - Not updating log level and conditions because diff [] does not indicate a log level change
14:09:18.640 [pool-1-thread-7] DEBUG c.h.operations.DeploymentOperable - Live updates completed for cluster hivemq
14:09:18.640 [pool-1-thread-7] INFO  com.hivemq.util.DeploymentUtil - Waiting for deployment hivemq to roll out...
14:09:18.660 [pool-1-thread-7] DEBUG com.hivemq.util.CrdUtil - Updating status condition on cluster hivemq of type AllNodesReady to status False with reason Waiting for roll-out to complete
14:09:18.780 [pool-1-thread-7] DEBUG com.hivemq.util.CrdUtil - Updating status on cluster hivemq: Waiting for deployment to become ready, ready: 0/1, state: Updating
14:09:18.839 [pool-1-thread-7] DEBUG i.f.k.c.d.i.WatchConnectionManager - Connecting websocket ... io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@66d2b0b3
14:09:19.042 [OkHttp https://10.96.0.1/...] DEBUG i.f.k.c.d.i.WatchConnectionManager - WebSocket successfully opened

I'm not a HiveMQ expert; I see some Java runtime exceptions in there, but I don't think the problem lies with them. What do you think?

PS: sorry for the long post ^^

Can you run kubectl describe pod hivemq-6bbf5ff6b9-vswll (or whatever the new name of the HiveMQ pod is)?
I am guessing your K8s cluster is having trouble scheduling the Pod, most likely due to insufficient resources.
We're aware of that exception in the log; it should be fixed in an upcoming release (it is nothing to worry about, though).
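
For example, something along these lines should show why the Pod cannot be scheduled (pod name taken from your earlier output, it will have changed if the Deployment was recreated; the node name assumes the default minikube profile):

❯ kubectl describe pod hivemq-6bbf5ff6b9-vswll
❯ kubectl get events --sort-by=.lastTimestamp
❯ kubectl describe node minikube

The Events section of the pod description and the node's allocatable/allocated resources should tell you whether it is stuck on insufficient CPU or memory. The default resource requests for a HiveMQ node are fairly large (4Gi of memory per node), so a small minikube VM often cannot fit them; helm show values hivemq/hivemq-operator lists the chart values you could lower if that is the case.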

Oh I see, good point about the resources. I spun up a brand-new cluster with more resources:

❯ minikube start --cpus 4 --memory 8192

then launched the HiveMQ operator:

❯ helm upgrade --install hivemq hivemq/hivemq-operator --set hivemq.nodeCount=1

but even after a while I still don't get a success status for the cluster initialisation:

❯ kubectl get hivemq-clusters
NAME     SIZE   IMAGE            VERSION     STATUS   ENDPOINT   MESSAGE
hivemq   1      hivemq/hivemq4   k8s-4.4.3
❯ watch kubectl get hivemq-clusters
❯ kubectl get hivemq-clusters
NAME     SIZE   IMAGE            VERSION     STATUS     ENDPOINT   MESSAGE
hivemq   1      hivemq/hivemq4   k8s-4.4.3   Creating              Initial status

The pods and services look like they are running:

❯ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
hivemq-hivemq-operator-operator-676c54cf9b-xxvf7   1/1     Running   0          3m53s
❯ kubectl get services
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
hivemq-hivemq-cc           ClusterIP   10.97.159.74     <none>        8080/TCP   3m24s
hivemq-hivemq-cluster      ClusterIP   None             <none>        7000/TCP   3m24s
hivemq-hivemq-mqtt         ClusterIP   10.96.164.194    <none>        1883/TCP   3m24s
hivemq-operator-operator   ClusterIP   10.109.218.195   <none>        443/TCP    4m
kubernetes                 ClusterIP   10.96.0.1        <none>        443/TCP    4m34s

Describing the cluster gives this:

❯ kubectl describe hivemq-clusters
Name:         hivemq
Namespace:    default
Labels:       app=hivemq-operator
              app.kubernetes.io/instance=hivemq
              app.kubernetes.io/managed-by=Helm

[...]

  Log Level:           INFO
  Memory:              4Gi
  Memory Limit Ratio:  1
  Mqtt:
    Keepalive Allow Unlimited:        true
    Keepalive Max:                    65535
    Max Packet Size:                  268435460
    Max Qos:                          2
    Queued Message Strategy:          discard
    Queued Messages Max Queue Size:   1000
    Retained Messages Enabled:        true
    Server Receive Maximum:           10
    Session Expiry Interval:          4294967295
    Shared Subscription Enabled:      true
    Subscription Identifier Enabled:  true
    Topic Alias Enabled:              true
    Topic Alias Max Per Client:       5
    Wildcard Subscription Enabled:    true
  Node Count:                         1
  Ports:
    Expose:  true
    Name:    mqtt
    Patch:
      [{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]
    Port:    1883
    Expose:  true
    Name:    cc
    Patch:
      [{"op":"add","path":"/spec/sessionAffinity","value":"ClientIP"}]
    Port:                  8080
  Rest API Configuration:  <rest-api>
    <enabled>${HIVEMQ_REST_API_ENABLED}</enabled>
    <listeners>
        <http>
            <port>${HIVEMQ_REST_API_PORT}</port>
            <bind-address>0.0.0.0</bind-address>
        </http>
    </listeners>
</rest-api>

  Restrictions:
    Incoming Bandwidth Throttling:  0
    Max Client Id Length:           65535
    Max Connections:                -1
    Max Topic Length:               65535
    No Connect Idle Timeout:        10000
  Security:
    Allow Empty Client Id:              true
    Allow Request Problem Information:  true
    Payload Format Validation:          false
    Topic Format Validation:            true
  Service Account Name:                 hivemq-hivemq-operator-hivemq
Status:
  Conditions:
    Last Transition Time:  2020-11-24T15:07:47.829104Z
    Reason:                initial status
    Status:                False
    Type:                  AllNodesReady
    Last Transition Time:  2020-11-24T15:07:47.829104Z
    Reason:                initial status
    Status:                False
    Type:                  AllExtensionsLoaded
    Last Transition Time:  2020-11-24T15:07:47.830761Z
    Reason:                Services transitioned to ready state
    Status:                True
    Type:                  AllServicesReady
    Last Transition Time:  2020-11-24T15:07:47.829104Z
    Reason:                initial status
    Status:                True
    Type:                  LogLevelApplied
  Message:                 Initial status
  Port Status:
  State:  Creating
  Warnings:
Events:  <none>

Do these conditions, AllExtensionsLoaded --> False and AllNodesReady --> False, ring any bells?