aws-eks: EKS Auto Mode - IAM role should not be created when default node pools are disabled #33771
Description
Describe the bug
When provisioning an EKS cluster in Auto Mode using the L2 construct within the aws_eks_v2_alpha library, disabling the default node pools should also prevent the corresponding IAM role for the nodes (nodeRole) from being created. Currently, the node role is created regardless, and this causes a stack deployment failure because the resulting request is essentially invalid.
For context: within our environment, we use separate subnet groups for the EKS control plane, workers, and ingress components. We remove the default node pools because the corresponding auto-generated NodeClass defaults to the control plane subnets, and because we want to ensure nodes use a specific KMS CMK for EBS encryption. We therefore plan to create our own customised versions of the 'system' and 'general' pools.
Regression Issue
- Select this option if this issue appears to be a regression.
Last Known Working CDK Version
No response
Expected Behavior
When the default node pools have explicitly been disabled, the supporting IAM role (nodeRole) should not be created.
Current Behavior
With the node pools disabled, the L2 construct still attempts to create the IAM role, which causes a stack deployment failure with the following message:
When Compute Config nodeRoleArn is not null or empty, nodePool value(s) must be provided
Reproduction Steps
Attempt to provision an EKS cluster using the L2 aws_eks_v2_alpha construct and specify an empty node pool list within the compute config:
eks_v2.Cluster(
    ...
    compute=eks_v2.ComputeConfig(node_pools=[])
)
Possible Solution
Add an additional check on the status of the default node pools before creating the node role, either here:
nodeRoleArn: !autoModeEnabled ? undefined : props.compute?.nodeRole?.roleArn ?? this.addNodePoolRole(`${id}nodePoolRole`).roleArn,
Or here:
aws-cdk/packages/@aws-cdk/aws-eks-v2-alpha/lib/cluster.ts
Lines 1609 to 1620 in 4128ff4
private addNodePoolRole(id: string): iam.Role {
  const role = new iam.Role(this, id, {
    assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
    // to be able to access the AWSLoadBalancerController
    managedPolicies: [
      // see https://docs.aws.amazon.com/eks/latest/userguide/automode-get-started-cli.html#auto-mode-create-roles
      iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSWorkerNodePolicy'),
      iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEC2ContainerRegistryReadOnly'),
    ],
  });
  return role;
}
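For illustration only, the intended guard can be sketched in plain Python (the function and ARN placeholder below are hypothetical; the real fix would live in the TypeScript shown above):

```python
def resolve_node_role_arn(auto_mode_enabled, node_pools, explicit_role_arn=None):
    """Sketch of the proposed check: only reference a node role when
    Auto Mode is enabled AND the default node pools have not been
    explicitly disabled (an empty list means 'disabled')."""
    if not auto_mode_enabled or node_pools == []:
        return None
    # Fall back to a generated role only when node pools will exist.
    return explicit_role_arn or "<generated-node-pool-role-arn>"
```

With this guard, `compute=ComputeConfig(node_pools=[])` would yield no nodeRoleArn at all, avoiding the invalid-request failure above.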
Additional Information/Context
Workaround
Use a CDK escape hatch to remove the node role reference within the cluster properties:
cluster.node.default_child.add_deletion_override("Properties.ComputeConfig.NodeRoleArn")
This allows the cluster to be provisioned successfully without any default node pools or the corresponding IAM role.
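To make the effect of the override concrete, here is a minimal plain-Python model of what a deletion override does to the synthesized resource (the template dict is a simplified assumption for illustration, not actual synthesized output):

```python
# Simplified stand-in for the synthesized CloudFormation resource.
template = {
    "Properties": {
        "ComputeConfig": {
            "NodeRoleArn": {"Fn::GetAtt": ["ClusterNodePoolRole", "Arn"]},
            "NodePools": [],
        }
    }
}

def apply_deletion_override(resource, path):
    """Model of a deletion override: walk the dotted path and
    remove the final key from the resource dict."""
    *parents, leaf = path.split(".")
    node = resource
    for key in parents:
        node = node[key]
    node.pop(leaf, None)

apply_deletion_override(template, "Properties.ComputeConfig.NodeRoleArn")
```

After the override, the ComputeConfig retains the empty NodePools list but no longer references a node role, which is the valid combination the service expects.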
CDK CLI Version
2.1002.0 (build 09ef5a0)
Framework Version
No response
Node.js Version
v22.14.0
OS
MacOS
Language
Python
Language Version
3.11.11
Other information
No response