Overview
This guide assumes you have already upgraded to terraform-aws-modules/eks/aws v20 by following the EKS module v17 to v20 upgrade guide, and that you are now proceeding with the upgrade to v21.
Upgrade Procedure
The following scenario explains the process of upgrading terraform-aws-modules/eks/aws from v20.37.2 to v21.1.0.
Step 1: Update the EKS module version in your main Terraform code
# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.1.0" # Bump from 20.37.2 to 21.1.0
}
Starting with v21, terraform-aws-modules/eks/aws is fully compatible with AWS provider v6.x, so you no longer need to pin the AWS provider below 6.0 in versions.tf.
# versions.tf
terraform {
  required_version = ">= 1.5.7"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 6.0.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = ">= 4.0.0"
    }
  }
}
Step 2: Update Karpenter submodule if used
The Karpenter submodule is part of the EKS module. If you use it, bump its version to match the EKS module version.
# main.tf
module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "21.1.0" # Bump from 20.37.2 to 21.1.0
}
Step 3: Update variable names
The biggest change in EKS module v21 is the removal of the cluster_ prefix from most variable names.
Since most variables are affected, apply the renames listed in the official upgrade guide throughout your Terraform code.
# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.1.0" # Bump from 20.37.2 to 21.1.0

  # cluster_name -> name
  name = local.name
  # cluster_version -> kubernetes_version
  kubernetes_version = local.cluster_version
}
List of Variable Changes
- Variables prefixed with cluster_* have been stripped of the prefix to better match the underlying API (a consolidated sketch follows this list):
  - cluster_name → name
  - cluster_version → kubernetes_version
  - cluster_enabled_log_types → enabled_log_types
  - cluster_force_update_version → force_update_version
  - cluster_compute_config → compute_config
  - cluster_upgrade_policy → upgrade_policy
  - cluster_remote_network_config → remote_network_config
  - cluster_zonal_shift_config → zonal_shift_config
  - cluster_additional_security_group_ids → additional_security_group_ids
  - cluster_endpoint_private_access → endpoint_private_access
  - cluster_endpoint_public_access → endpoint_public_access
  - cluster_endpoint_public_access_cidrs → endpoint_public_access_cidrs
  - cluster_ip_family → ip_family
  - cluster_service_ipv4_cidr → service_ipv4_cidr
  - cluster_service_ipv6_cidr → service_ipv6_cidr
  - cluster_encryption_config → encryption_config
  - create_cluster_primary_security_group_tags → create_primary_security_group_tags
  - cluster_timeouts → timeouts
  - create_cluster_security_group → create_security_group
  - cluster_security_group_id → security_group_id
  - cluster_security_group_name → security_group_name
  - cluster_security_group_use_name_prefix → security_group_use_name_prefix
  - cluster_security_group_description → security_group_description
  - cluster_security_group_additional_rules → security_group_additional_rules
  - cluster_security_group_tags → security_group_tags
  - cluster_encryption_policy_use_name_prefix → encryption_policy_use_name_prefix
  - cluster_encryption_policy_name → encryption_policy_name
  - cluster_encryption_policy_description → encryption_policy_description
  - cluster_encryption_policy_path → encryption_policy_path
  - cluster_encryption_policy_tags → encryption_policy_tags
  - cluster_addons → addons
  - cluster_addons_timeouts → addons_timeouts
  - cluster_identity_providers → identity_providers
- eks-managed-node-group sub-module:
  - cluster_version → kubernetes_version
- self-managed-node-group sub-module:
  - cluster_version → kubernetes_version
  - delete_timeout → timeouts
- fargate-profile and karpenter sub-modules: none
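To make the renames concrete, here is a minimal before/after sketch applying a few of the more common renames in one module block. The values (local.name, the log types, the addon set) are illustrative assumptions, not recommendations.
# main.tf - a few common v21 renames applied in one place (values are illustrative)
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.1.0"

  name               = local.name            # was: cluster_name
  kubernetes_version = local.cluster_version # was: cluster_version

  endpoint_public_access = false            # was: cluster_endpoint_public_access
  enabled_log_types      = ["api", "audit"] # was: cluster_enabled_log_types

  # was: cluster_addons
  addons = {
    coredns    = {}
    kube-proxy = {}
    vpc-cni    = {}
  }
}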
Step 4: Remove self_managed_node_group_defaults
Starting with terraform-aws-modules/eks/aws v21, self_managed_node_group_defaults is no longer supported. Remove it and declare any settings it carried on each node group instead (see the sketch after the code block).
# main.tf
module "eks" {
  # self_managed_node_group_defaults = {
  #   ami_type               = "AL2023_x86_64_STANDARD"
  #   vpc_security_group_ids = [aws_security_group.sg_eks_worker_from_lb.id]
  # }
}
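With the defaults block gone, anything it carried must now be declared on each node group. A minimal sketch, assuming ami_type was the only setting left to relocate (moving vpc_security_group_ids is covered in the next step):
# main.tf - former defaults declared per node group
module "eks" {
  self_managed_node_groups = {
    my_nodegroup_1 = {
      ami_type = "AL2023_x86_64_STANDARD" # moved out of self_managed_node_group_defaults
      # ...rest of the node group definition
    }
  }
}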
Step 5: Add launch_template and vpc_security_group_ids to mixed_instances_policy
Move vpc_security_group_ids, previously set in self_managed_node_group_defaults, onto each node group, and wrap the existing overrides in a launch_template block inside mixed_instances_policy. Repeat this for every node group declared in self_managed_node_groups.
# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.1.0" # Bump from 20.37.2 to 21.1.0

  self_managed_node_groups = {
    my_nodegroup_1 = {
      vpc_security_group_ids = [
        aws_security_group.sg_eks_worker_from_lb.id
      ]

      mixed_instances_policy = {
        # Add launch_template in mixed_instances_policy
        launch_template = {
          override = [
            {
              instance_requirements = {
                cpu_manufacturers                           = ["intel"]
                instance_generations                        = ["current", "previous"]
                spot_max_price_percentage_over_lowest_price = 100
                vcpu_count = {
                  min = 1
                }
                allowed_instance_types = ["t*", "m*"]
              }
            }
          ]
        }
      }
    }
  }
}
Step 6: Configure metadata_options for IMDS access
Starting from EKS module v21, the default http_put_response_hop_limit has been changed from 2 to 1. This change can cause issues with pods accessing the Instance Metadata Service (IMDS).
Why this matters:
- Pods running on EC2 instances need to access IMDS through the node
- A hop limit of 1 only allows the instance itself to access IMDS
- A hop limit of 2 is required for containers/pods to access IMDS through the node
Solution:
Add metadata_options to your node groups (self-managed, EKS-managed, or both):
# main.tf - For self-managed node groups
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.1.0"

  self_managed_node_groups = {
    my_nodegroup_1 = {
      vpc_security_group_ids = [
        aws_security_group.sg_eks_worker_from_lb.id
      ]

      # Add metadata_options to allow pods to access IMDS
      metadata_options = {
        http_put_response_hop_limit = 2 # Required for pods to access IMDS
      }

      mixed_instances_policy = {
        launch_template = {
          override = [
            {
              instance_requirements = {
                cpu_manufacturers                           = ["intel"]
                instance_generations                        = ["current", "previous"]
                spot_max_price_percentage_over_lowest_price = 100
                vcpu_count = {
                  min = 1
                }
                allowed_instance_types = ["t*", "m*"]
              }
            }
          ]
        }
      }
    }
  }
}
# main.tf - For EKS-managed node groups
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.1.0"

  eks_managed_node_groups = {
    my_eks_nodegroup = {
      # Add metadata_options to allow pods to access IMDS
      metadata_options = {
        http_put_response_hop_limit = 2 # Required for pods to access IMDS
      }
    }
  }
}
Important: If you don't configure this, workloads that read instance metadata, such as pods falling back to the node's instance profile credentials rather than using IRSA, may fail.
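If you also enforce IMDSv2 on your nodes, the hop limit interacts with the token settings. Here is a hedged sketch of a fuller metadata_options block; http_endpoint and http_tokens reflect an assumed security posture, not anything the module requires:
# main.tf - fuller metadata_options sketch (illustrative assumptions)
module "eks" {
  eks_managed_node_groups = {
    my_eks_nodegroup = {
      metadata_options = {
        http_endpoint               = "enabled"  # keep IMDS reachable from the node
        http_tokens                 = "required" # enforce IMDSv2 session tokens (assumed policy)
        http_put_response_hop_limit = 2          # lets pods reach IMDS through the node
      }
    }
  }
}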
Step 7: Enable deletion protection
Terraform EKS module v21.1.0 adds support for EKS deletion protection, so it is recommended to set deletion_protection as shown below to prevent accidental cluster deletion. Note that while deletion protection is enabled, cluster deletion (including terraform destroy) will fail; set deletion_protection = false and apply that change first when you intend to delete the cluster.
# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.1.0"

  name               = local.name
  kubernetes_version = local.cluster_version

  deletion_protection = true
}
Step 8: Apply the changes
Once all of the code changes for v21.1.0 are in place, run the upgrade:
terraform init -upgrade # pull the new module and provider versions
terraform plan          # review the planned changes before applying
terraform apply
Conclusion
The EKS Terraform module has many features and a complex structure, and the wider the gap between major versions, the more complex an upgrade becomes. In particular, skipping several major versions lets breaking changes accumulate, making migration very difficult.
Therefore, in environments using the EKS Terraform module, it is important to keep up with upgrades whenever major versions are released. This allows you to gradually apply the changes in each upgrade step and reduce the overall maintenance burden.
References
- Upgrade from v20.x to v21.x - AWS EKS Terraform module