Kubernetes can't seem to upgrade pods correctly
June 3, 2018 · iscsi synology kubernetes
I don’t know what’s up with this one.
It’s simple. Edit the deployment, change the image, save and exit, and Kubernetes should stop the running pod and launch a new one, right?
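The same edit can be done non-interactively — a sketch, where the deployment and container name "plex" and the image tag are assumptions, since the real names aren't in the post:

```shell
# Hypothetical deployment/container name and image tag -- adjust to your cluster.
kubectl set image deployment/plex plex=plexinc/pms-docker:latest

# Watch the rollout; with an iSCSI-backed volume this is where it hangs.
kubectl rollout status deployment/plex
```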
Except it doesn’t. It tries to start the new pod while keeping the old one running. This fails because the iSCSI volumes can’t be mounted in two places.
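In fairness, this is the default RollingUpdate strategy doing exactly what it's designed to do: bring the new pod up before tearing the old one down. For a single-replica deployment pinned to a ReadWriteOnce iSCSI volume, the Recreate strategy avoids the overlap by killing the old pod first — a sketch, again assuming a deployment named plex:

```shell
# Switch the update strategy to Recreate, which terminates the old pod before
# starting the new one. rollingUpdate must be nulled out when changing the type.
kubectl patch deployment plex \
  -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
```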
Easy enough: delete the old pod. That should have worked, but the deployment immediately recreated the old pod and kept trying to start the new one.
Next option: scale to 0 and then back to 1.
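Roughly, with the same hypothetical plex deployment:

```shell
kubectl scale deployment plex --replicas=0
# Wait until the old pod is actually gone, not just Terminating.
kubectl get pods -w
kubectl scale deployment plex --replicas=1
```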
OK… except that now the iSCSI volume is still mounted by the kubelet on the other server, so the new pod won’t mount it.
Whiskey Tango Foxtrot, yo.
SSH into the server where the volume is mounted, shell into the kubelet, and log out of the iSCSI session with:
iscsiadm -m node -T iqn.2006-04.us.monach:nas.plex-config -p 10.68.0.11 -u
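If it's not obvious which node is still holding the LUN, iscsiadm can list the active sessions first:

```shell
# List active iSCSI sessions on this node; the stale target should appear here.
iscsiadm -m session
# -P 3 adds the attached block devices, so you can see what's still in use.
iscsiadm -m session -P 3
```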
After that everything mounts fine.
It’s not a very good automation framework if it requires manual intervention every time I look at it.