I'm not having any luck reproducing this. Under normal conditions, even when there are too many pods, the NodeAffinity error doesn't occur because kube-scheduler is smart enough not to schedule conflicting pods onto the same node. Somehow, this deployment hit a corner case where kube-scheduler thought the node had room for a new pod, but kubelet's admission check saw a conflict and rejected it.