# Palette Dev Engine (PDE)
Use the following content to help you troubleshoot issues you may encounter when using Palette Dev Engine (PDE).
## Resource Requests
All Cluster Groups are configured with a default LimitRange. The LimitRange configuration is in the Cluster Group's Virtual Cluster configuration section. Packs deployed to a virtual cluster should have the `resources:` section defined in the `values.yaml` file. Pack authors must specify both the `requests` and `limits`, or omit the section entirely to let the system manage the resources.

If you specify `requests` but not `limits`, the default limits imposed by the LimitRange will likely be lower than the requests, causing the following error.
Invalid value: "300m": must be less than or equal to CPU limit spec.containers[0].resources.requests: Invalid value: "512Mi": must be less than or equal to memory limit
The workaround is to define both the `requests` and `limits`.
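For example, a pack's `values.yaml` can set both fields so the default LimitRange never imposes a limit below the request. This is a minimal sketch with placeholder values; adjust the CPU and memory figures to match your pack's actual needs.

```yaml
resources:
  requests:
    cpu: 300m
    memory: 512Mi
  limits:
    # Each limit must be greater than or equal to its corresponding request.
    cpu: 500m
    memory: 1Gi
```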
## Scenario - Controller Manager Pod Not Upgraded
If the `palette-controller-manager` pod for a virtual cluster is not upgraded after a Palette platform upgrade, use the following steps to resolve the issue.
### Debug Steps
1. Ensure you can connect to the host cluster using the cluster's kubeconfig file. Refer to the Access Cluster with CLI guide for additional guidance.
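   For example, a quick connectivity check might look like the following. The kubeconfig path is illustrative; use the location where you saved the host cluster's kubeconfig file.

   ```shell
   export KUBECONFIG=/path/to/host-cluster.kubeconfig
   kubectl get nodes
   ```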
2. Identify the namespace where the virtual cluster is active. Use the virtual cluster's ID to identify the correct namespace. Issue the following command to list the namespaces. Make sure you get the correct namespace for the virtual cluster and not the main host cluster namespace.

   ```shell
   kubectl get pods --all-namespaces | grep cluster-management-agent
   ```

   In this example, the virtual cluster ID is `666c92d18b802543a124513d`.

   ```shell
   cluster-666c89f28b802503dc8542d3   cluster-management-agent-f766467f4-8prd6   1/1   Running   1 (29m ago)   30m
   cluster-666c92d18b802543a124513d   cluster-management-agent-f766467f4-8v577   1/1   Running   0             4m13s
   ```

   :::tip

   You can find the virtual cluster ID in the URL when you access the virtual cluster in the Palette UI. From the left **Main Menu**, click on **Cluster Groups** and select the cluster group hosting your virtual cluster. Click on the virtual cluster name to access the virtual cluster. The URL will contain the virtual cluster ID.

   :::
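   As a convenience, you can capture the namespace in a shell variable for the commands that follow. This snippet is not part of the original steps; the cluster ID shown is the one from this example, so substitute your own.

   ```shell
   NAMESPACE=$(kubectl get pods --all-namespaces | grep cluster-management-agent | grep 666c92d18b802543a124513d | awk '{print $1}')
   echo "$NAMESPACE"
   ```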
3. Scale down the `cluster-management-agent` deployment to zero replicas. Replace `<namespace>` with the namespace of the virtual cluster.

   ```shell
   kubectl scale deployment cluster-management-agent --replicas=0 --namespace <namespace>
   ```

   ```shell
   deployment.apps/cluster-management-agent scaled
   ```
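   Optionally, confirm the scale-down before continuing. This is a standard kubectl check rather than part of the original steps; the deployment should report `0/0` ready replicas.

   ```shell
   kubectl get deployment cluster-management-agent --namespace <namespace>
   ```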
4. Edit the `palette-controller-manager` deployment. Under the `resources` section for the `manager` and `atop-manager` containers, add the `ephemeral-storage` field with the value `1Gi`.

   ```yaml
   name: manager
   resources:
     limits:
       ephemeral-storage: 1Gi
   ```

   ```yaml
   name: atop-manager
   resources:
     limits:
       ephemeral-storage: 1Gi
   ```

   You can use the following command to edit the deployment. Press `i` to enter insert mode, make the necessary changes, and then press `Esc` followed by `:wq` to save and exit.

   ```shell
   kubectl edit deployment palette-controller-manager --namespace <namespace>
   ```

   ```shell
   deployment.apps/palette-controller-manager edited
   ```
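   If you prefer a non-interactive edit, a JSON patch can add the same field. The following is a sketch rather than part of the original guide; it assumes the `manager` container is at index 0 and `atop-manager` at index 1 in the container list, and that both containers already define `resources.limits`. Verify the container order in your deployment before using it.

   ```shell
   kubectl patch deployment palette-controller-manager --namespace <namespace> --type json \
     --patch '[
       {"op": "add", "path": "/spec/template/spec/containers/0/resources/limits/ephemeral-storage", "value": "1Gi"},
       {"op": "add", "path": "/spec/template/spec/containers/1/resources/limits/ephemeral-storage", "value": "1Gi"}
     ]'
   ```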
5. Wait for the new `palette-controller-manager` pod to become healthy and active.
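   One way to confirm this is with a rollout status check, which blocks until the deployment's pods are ready. This is a standard kubectl command rather than part of the original steps.

   ```shell
   kubectl rollout status deployment palette-controller-manager --namespace <namespace>
   ```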
6. Scale up the `cluster-management-agent` deployment to one replica. Replace `<namespace>` with the namespace of the virtual cluster.

   ```shell
   kubectl scale deployment cluster-management-agent --replicas=1 --namespace <namespace>
   ```

   ```shell
   deployment.apps/cluster-management-agent scaled
   ```
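   As a final check, you can verify that the agent pod is back in the **Running** state. This verification is illustrative rather than part of the original steps.

   ```shell
   kubectl rollout status deployment cluster-management-agent --namespace <namespace>
   kubectl get pods --namespace <namespace>
   ```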