Greetings team,

First of all, thanks for the great work and effort that has gone into adapting Kasm Workspaces to run in Kubernetes.

As an improvement, I'd suggest scheduling kasm-manager as a StatefulSet instead of a Deployment. This would keep a persistent list of managers registered on the API server, instead of the current behavior.

Scheduling the manager as a Deployment causes the pods to get new names each time they restart, which leaves managers registered on the API server but marked as missing. These managers are never re-registered, since there is almost no chance a new pod reuses the old manager's name.

The attached screenshot shows several missing managers as a result of repeated restarts.
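For illustration, here is a minimal sketch of what the suggested change could look like. The image tag, labels, and replica count are placeholders rather than the actual chart values:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kasm-manager
spec:
  serviceName: kasm-manager        # headless Service that gives pods stable identities
  replicas: 2
  selector:
    matchLabels:
      app: kasm-manager            # placeholder label
  template:
    metadata:
      labels:
        app: kasm-manager
    spec:
      containers:
        - name: manager
          image: kasmweb/manager:example   # placeholder image/tag
```

With a StatefulSet the pods come back as kasm-manager-0, kasm-manager-1, and so on after a restart, so a restarted pod would re-register under the same manager name instead of leaving an orphaned entry behind.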
The managers are supposed to self-deregister on shutdown. Internally there is a signal trap that gracefully deregisters the manager on pod shutdown. What flavor of Kubernetes are you running, and is there anything special about your environment or configuration that you think is relevant given that information?
I am using Talos Linux as the Kubernetes distribution. What happened is that I was rebooting the nodes as part of a rolling two-hop upgrade of the cluster, so I was evicting the pods and draining the nodes. All nodes rebooted twice. I didn't notice the additional kasm-managers until two days ago.

By the way, this also happened to rdp-gateway and kasm-guac; I found several stale instances of them registered.
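One related thing that may be worth checking for the drain scenario, as a sketch only: when a pod is evicted during a drain, the container receives SIGTERM and the shutdown handler described above only has `terminationGracePeriodSeconds` to finish deregistering before the kubelet force-kills the pod. The values below are illustrative, not the chart defaults:

```yaml
# Pod template fragment (illustrative values): give the shutdown handler
# enough time to deregister before the pod is force-killed.
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 60   # Kubernetes default is 30s
      containers:
        - name: manager
          image: kasmweb/manager:example  # placeholder image/tag
```

If a node reboots without a clean drain, or the grace period expires before deregistration completes, the handler never finishes, which would explain the stale kasm-manager, rdp-gateway, and kasm-guac entries.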