Brief Description
After installing the Remote CLI client, the user is unable to run 'kubectl apply -f' successfully: the command fails with a validation error ("the server could not find the requested resource"). The k8s client bundled with the remote CLI client (v1.5.2) is many minor versions older than the server (v1.21.3), far outside kubectl's supported skew of one minor version, and should be upgraded.
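The mismatch can be sanity-checked mechanically. A minimal sketch, using the version strings captured in this report and kubectl's documented support window of one minor version of client/server skew:

```shell
# Compare the minor versions reported by "kubectl version --short".
# The version strings below are the ones captured in this report.
client="Client Version: v1.5.2"
server="Server Version: v1.21.3"

# Extract the minor version number from a "vMAJOR.MINOR.PATCH" string.
minor() { echo "$1" | sed 's/.*v[0-9]*\.\([0-9]*\)\..*/\1/'; }

skew=$(( $(minor "$server") - $(minor "$client") ))
# kubectl supports at most one minor version of client/server skew.
if [ "$skew" -gt 1 ]; then
  echo "unsupported skew: client is $skew minor versions behind the server"
fi
```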
Severity
Major: System/Feature is usable but degraded
Steps to Reproduce
In controller-0 (r430_3_4):
USER="admin-user"
OUTPUT_FILE=temp-kubeconfig
cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${USER}
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: ${USER}
  namespace: kube-system
EOF
kubectl apply -f admin-login.yaml
TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${USER} | awk '{print $1}') | grep "token:" | awk '{print $2}')
sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-cluster wrcp-cluster --server=https://[2620:10a:a001:a103::11]:6443 --insecure-skip-tls-verify
sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-credentials ${USER} --token=$TOKEN_DATA
sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-context ${USER}@wrcp-cluster --cluster=wrcp-cluster --user ${USER} --namespace=default
sudo kubectl config --kubeconfig ${OUTPUT_FILE} use-context ${USER}@wrcp-cluster
sudo chmod a+rw temp-kubeconfig
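The TOKEN_DATA pipeline above is the fiddly part of this setup. A small sketch of its first stage run against canned "kubectl -n kube-system get secret" output (the secret names below are invented for illustration, not taken from the lab):

```shell
# Canned output standing in for "kubectl -n kube-system get secret";
# the secret names are made-up examples.
secrets='NAME                    TYPE                                  DATA   AGE
admin-user-token-abc12  kubernetes.io/service-account-token   3      1m
default-token-xyz99     kubernetes.io/service-account-token   3      10d'

USER="admin-user"
# Same grep/awk stages as the TOKEN_DATA command: pick the row for the
# service account, keep only the secret name in column one.
secret_name=$(echo "$secrets" | grep ${USER} | awk '{print $1}')
echo "$secret_name"
```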
In RunAgent(RunAgent2):
scp -6 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null sysadmin@[2620:10a:a001:a103::11]:temp-kubeconfig /home/cumulus/repositories/cgcs-remote-cli/remote-client/RegionOne
source /home/cumulus/repositories/cgcs-remote-cli/remote-client/RegionOne/docker_image_version.sh
cd /home/cumulus/repositories/cgcs-remote-cli/remote-client/RegionOne
./configure_client.sh -k temp-kubeconfig -w /home/cumulus/repositories/cgcs-remote-cli/remote-client/RegionOne/remote_wd -p tis-lab-registry.cumulus.wrs.com:9001/wrcp-staging/$PLATFORM_DOCKER_IMAGE
source remote_client_platform.sh
Li69nux*
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
cd /home/cumulus/repositories/cgcs-remote-cli/remote-client/RegionOne/remote_wd/
scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r svc-cgcsauto@128.224.150.21:/sandbox/custom_apps/ /home/cumulus/repositories/cgcs-remote-cli/remote-client/RegionOne/remote_wd/custom_apps
In controller-0 (r430_3_4):
sudo su -
mkdir -p /wd/
scp -6 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r svc-cgcsauto@[2620:10a:a001:a103::21]:/sandbox/custom_apps/ /wd/
source /etc/platform/openrc
In RunAgent(RunAgent2):
kubectl apply -f custom_apps/hellokitty.yaml
error: error validating "custom_apps/hellokitty.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
kubectl version --short
Client Version: v1.5.2
Server Version: v1.21.3
On controller-0 (r430_3_4):
kubectl version --short
Client Version: v1.21.3
Server Version: v1.21.3
Expected Behavior
kubectl apply -f custom_apps/hellokitty.yaml
pod/hellokitty created
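The report does not include custom_apps/hellokitty.yaml itself. For context, a minimal Pod manifest consistent with the "pod/hellokitty created" output might look like the following sketch (the image and command are assumptions, not from the lab):

```shell
# Hypothetical reconstruction; the real hellokitty.yaml is not in the report.
cat <<'EOF' > hellokitty.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hellokitty
spec:
  containers:
  - name: hellokitty
    image: busybox
    command: ["sleep", "3600"]
EOF
```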
Actual Behavior
kubectl apply -f custom_apps/hellokitty.yaml
error: error validating "custom_apps/hellokitty.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
Reproducibility
yes
System Configuration
all configurations
Branch/Pull Time/Commit
-
Last Pass
-
Timestamp/Logs
-
Alarms
No alarms
Test Activity
[Regression Testing]
Workaround
-