Overview

This upgrade guide assumes you have already migrated terraform-aws-modules/eks/aws to v20 by following the EKS module v17 to v20 upgrade guide, and are now proceeding with the upgrade to v21.

Upgrade Procedure

The following scenario explains the process of upgrading terraform-aws-modules/eks/aws from v20.37.2 to v21.9.0.
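Before bumping anything, it helps to confirm which module version is actually installed. After `terraform init`, Terraform records installed modules in `.terraform/modules/modules.json`. The sketch below parses a trimmed sample of that file; the `/tmp/demo` path and the `eks` module key are illustrative stand-ins for your working directory and module name:

```shell
# Create a sample modules.json like the one `terraform init` writes
mkdir -p /tmp/demo/.terraform/modules
cat > /tmp/demo/.terraform/modules/modules.json <<'EOF'
{"Modules":[{"Key":"eks","Source":"registry.terraform.io/terraform-aws-modules/eks/aws","Version":"20.37.2","Dir":".terraform/modules/eks"}]}
EOF

# Print the installed version of the "eks" module
python3 - <<'EOF'
import json

with open("/tmp/demo/.terraform/modules/modules.json") as f:
    doc = json.load(f)

for mod in doc["Modules"]:
    if mod.get("Key") == "eks":
        print(mod["Version"])  # expect 20.37.2 before the upgrade
EOF
```

If the printed version is not the v20 release you expect, finish the v20 upgrade first before continuing.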

Step 1: Update the EKS module version in your main Terraform code

# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.9.0" # Bump from 20.37.2 to 21.9.0
}

terraform-aws-modules/eks/aws v21 is fully compatible with AWS provider v6.x, so there is no longer any need to pin the AWS provider below 6.0 in versions.tf.

# versions.tf
terraform {
  required_version = ">= 1.5.7"
  
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 6.0.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = ">= 4.0.0"
    }
  }
}

Step 2: Update Karpenter submodule if used

The Karpenter submodule ships inside the EKS module. If you use it, bump its version to match the EKS module version.

# main.tf
module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "21.9.0" # Bump from 20.37.2 to 21.9.0
}

Step 3: Update variable names

The biggest change in EKS module v21 is the removal of the cluster_ prefix from all variable names. Most variable names are affected by this change, so you’ll need to update your Terraform code accordingly.

# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.9.0" # Bump from 20.37.2 to 21.9.0

  # cluster_name -> name
  name = local.name

  # cluster_version -> kubernetes_version
  kubernetes_version = local.cluster_version
}

Note: See the complete variable mapping table below for all variable name changes.
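For large codebases, a scripted rename can speed this up. The sketch below (GNU sed assumed; commit or back up first) anchors each pattern to the attribute-assignment form `name =` at the start of a line, so value references such as local.cluster_version are left untouched. The three renames shown are only a sample; extend the expression list from the mapping table:

```shell
# Demo input standing in for a real main.tf
cat > /tmp/eks-rename-demo.tf <<'EOF'
  cluster_name    = local.name
  cluster_version = local.cluster_version
  cluster_addons  = local.addons
EOF

# Rename only attribute assignments, not value references like local.cluster_version
sed -E -i \
  -e 's/^([[:space:]]*)cluster_name([[:space:]]*=)/\1name\2/' \
  -e 's/^([[:space:]]*)cluster_version([[:space:]]*=)/\1kubernetes_version\2/' \
  -e 's/^([[:space:]]*)cluster_addons([[:space:]]*=)/\1addons\2/' \
  /tmp/eks-rename-demo.tf

cat /tmp/eks-rename-demo.tf
```

Run terraform validate afterwards to catch any assignment the patterns missed.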

Step 4: Remove eks_managed_node_group_defaults and self_managed_node_group_defaults

Starting from terraform-aws-modules/eks/aws v21.x, both eks_managed_node_group_defaults and self_managed_node_group_defaults are no longer supported. You must merge all default values directly into each node group configuration.

Before (v20):

module "eks" {
  eks_managed_node_group_defaults = {
    ami_type        = "AL2023_x86_64_STANDARD"
    disk_size       = 50
    create_iam_role = false
    iam_role_arn    = "arn:aws:iam::ACCOUNT:role/node-role"
  }

  eks_managed_node_groups = {
    my_nodegroup = {
      node_group_name = "my-nodegroup"
      capacity_type   = "SPOT"
      # Other configs inherit from defaults
    }
  }
}

After (v21):

module "eks" {
  eks_managed_node_groups = {
    my_nodegroup = {
      node_group_name = "my-nodegroup"
      capacity_type   = "SPOT"
      
      # Merge defaults directly into node group
      ami_type        = "AL2023_x86_64_STANDARD"
      disk_size       = 50
      create_iam_role = false
      iam_role_arn    = "arn:aws:iam::ACCOUNT:role/node-role"
      
      # Add metadata_options for IMDS access
      metadata_options = {
        http_tokens                 = "required"
        http_put_response_hop_limit = 2
      }
    }
  }
}

For self-managed node groups:

module "eks" {
  # self_managed_node_group_defaults = {  # ❌ REMOVED in v21
  #   ami_type               = "AL2023_x86_64_STANDARD"
  #   vpc_security_group_ids = [aws_security_group.sg_eks_worker_from_lb.id]
  # }
  
  self_managed_node_groups = {
    my_nodegroup = {
      # Merge defaults here instead
      ami_type               = "AL2023_x86_64_STANDARD"
      vpc_security_group_ids = [aws_security_group.sg_eks_worker_from_lb.id]
    }
  }
}

Step 5: Update self-managed node groups (if applicable)

Note: This step only applies if you’re using self_managed_node_groups. Skip if you only use EKS-managed node groups.

For self-managed node groups, set vpc_security_group_ids at the node group level and nest the launch_template overrides inside mixed_instances_policy. Repeat this for each node group declared in self_managed_node_groups.

# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.9.0" # Bump from 20.37.2 to 21.9.0

  self_managed_node_groups = {
    my_nodegroup_1 = {
      vpc_security_group_ids = [
        aws_security_group.sg_eks_worker_from_lb.id
      ]

      mixed_instances_policy = {
        # Add launch_template in mixed_instances_policy
        launch_template = {
          override = [
            {
              instance_requirements = {
                cpu_manufacturers                           = ["intel"]
                instance_generations                        = ["current", "previous"]
                spot_max_price_percentage_over_lowest_price = 100

                vcpu_count = {
                  min = 1
                }

                allowed_instance_types = ["t*", "m*"]
              }
            }
          ]
        }
      }
    }
  }
}

Step 6: Configure metadata_options for IMDS access

Starting from EKS module v21, the default http_put_response_hop_limit has been changed from 2 to 1. This change can cause issues with pods accessing the Instance Metadata Service (IMDS).

Why this matters:

  • Pods running on EC2 instances need to access IMDS through the node
  • A hop limit of 1 only allows the instance itself to access IMDS
  • A hop limit of 2 is required for containers/pods to access IMDS through the node

Solution: Add metadata_options to your node groups (self-managed, EKS-managed, or both):

# main.tf - For self-managed node groups
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.9.0"

  self_managed_node_groups = {
    my_nodegroup_1 = {
      vpc_security_group_ids = [
        aws_security_group.sg_eks_worker_from_lb.id
      ]

      # Add metadata_options to allow pods to access IMDS
      metadata_options = {
        http_put_response_hop_limit = 2 # Required for pods to access IMDS
      }

      mixed_instances_policy = {
        launch_template = {
          override = [
            {
              instance_requirements = {
                cpu_manufacturers                           = ["intel"]
                instance_generations                        = ["current", "previous"]
                spot_max_price_percentage_over_lowest_price = 100

                vcpu_count = {
                  min = 1
                }

                allowed_instance_types = ["t*", "m*"]
              }
            }
          ]
        }
      }
    }
  }
}

# main.tf - For EKS-managed node groups
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.9.0"

  eks_managed_node_groups = {
    my_eks_nodegroup = {
      # Add metadata_options to allow pods to access IMDS
      metadata_options = {
        http_put_response_hop_limit = 2 # Required for pods to access IMDS
      }
    }
  }
}

Step 7: Configure addons with conflict resolution

In v21, addon creation can conflict with the default add-ons that EKS installs into a cluster automatically. Add resolve_conflicts_on_create = "OVERWRITE" to each addon so Terraform takes ownership of the existing installation:

addons = {
  coredns = {
    addon_version               = "v1.12.1-eksbuild.2"
    resolve_conflicts_on_create = "OVERWRITE" # Required to avoid conflicts
  }
  kube_proxy = {
    addon_version               = "v1.33.5-eksbuild.1"
    resolve_conflicts_on_create = "OVERWRITE"
  }
  # ... other addons
}
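To pick addon_version values, the AWS CLI lists the versions compatible with your Kubernetes version, e.g. aws eks describe-addon-versions --addon-name coredns --kubernetes-version 1.33. The sketch below parses a trimmed sample of that response (the real output carries many more fields per version):

```shell
# Trimmed sample of an `aws eks describe-addon-versions` response
sample='{"addons":[{"addonName":"coredns","addonVersions":[{"addonVersion":"v1.12.1-eksbuild.2"},{"addonVersion":"v1.11.4-eksbuild.1"}]}]}'

# List the compatible versions found in the sample
printf '%s' "$sample" | python3 -c '
import json, sys

doc = json.load(sys.stdin)
for entry in doc["addons"][0]["addonVersions"]:
    print(entry["addonVersion"])
'
```

Against a live account, the same extraction can be done with the CLI's --query flag instead of a separate parser.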

Step 8: Enable EKS Deletion Protection (optional)

Starting from Terraform EKS module v21.9.0, EKS Deletion Protection is supported. Enabling it is recommended to prevent accidental cluster deletion.

# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.9.0"

  name                = local.name
  kubernetes_version  = local.cluster_version
  deletion_protection = true
}

Step 9: Review and apply changes

⚠️ CRITICAL: Review the plan carefully before applying!

terraform init -upgrade
terraform plan -out=upgrade.plan
terraform apply upgrade.plan
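One way to make that review harder to skip is to render the saved plan to text and scan it for a cluster replacement. A sketch: against a real plan you would feed `terraform show -no-color upgrade.plan` into the grep, while the sample text below stands in for that output:

```shell
# Sample of what `terraform show -no-color upgrade.plan` prints when the
# cluster would be destroyed and recreated
plan_text='  # module.eks.aws_eks_cluster.this[0] must be replaced
-/+ resource "aws_eks_cluster" "this" {'

# Flag the destructive change before anyone runs apply
if printf '%s\n' "$plan_text" | grep -q 'aws_eks_cluster.*must be replaced'; then
  echo "DANGER: plan replaces the EKS cluster - do not apply"
fi
```

A check like this fits naturally into a CI gate between plan and apply.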

Important: If the plan shows cluster recreation (aws_eks_cluster must be replaced), DO NOT APPLY. Check encryption_config and keep the existing KMS key ARN so the cluster is not recreated.
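If the recreation is driven by encryption_config (its default changed from {} in v20 to null in v21), pin the existing settings explicitly. A minimal sketch, assuming the cluster already encrypts secrets with a customer-managed key referenced as aws_kms_key.eks; the attribute names follow the v20 cluster_encryption_config shape and are assumed unchanged in v21:

```hcl
# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.9.0"

  # cluster_encryption_config -> encryption_config; the default is now null
  encryption_config = {
    provider_key_arn = aws_kms_key.eks.arn # keep the existing key ARN unchanged
    resources        = ["secrets"]
  }
}
```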

Variable Mapping Table

| v20.x Variable | v21.x Variable | Status | Notes |
| --- | --- | --- | --- |
| **Cluster Configuration** | | | |
| cluster_name | name | ✅ Renamed | Strip cluster_ prefix |
| cluster_version | kubernetes_version | ✅ Renamed | Strip cluster_ prefix |
| cluster_enabled_log_types | enabled_log_types | ✅ Renamed | Strip cluster_ prefix |
| cluster_endpoint_public_access | endpoint_public_access | ✅ Renamed | Strip cluster_ prefix |
| cluster_endpoint_public_access_cidrs | endpoint_public_access_cidrs | ✅ Renamed | Strip cluster_ prefix |
| cluster_endpoint_private_access | endpoint_private_access | ✅ Renamed | Strip cluster_ prefix |
| **Security Groups** | | | |
| cluster_security_group_id | security_group_id | ✅ Renamed | Strip cluster_ prefix |
| cluster_security_group_tags | security_group_tags | ✅ Renamed | Strip cluster_ prefix |
| create_cluster_security_group | create_security_group | ✅ Renamed | Strip cluster_ prefix |
| cluster_security_group_name | security_group_name | ✅ Renamed | Strip cluster_ prefix |
| cluster_security_group_use_name_prefix | security_group_use_name_prefix | ✅ Renamed | Strip cluster_ prefix |
| cluster_security_group_description | security_group_description | ✅ Renamed | Strip cluster_ prefix |
| cluster_security_group_additional_rules | security_group_additional_rules | ✅ Renamed | Strip cluster_ prefix |
| **Encryption** | | | |
| cluster_encryption_config | encryption_config | ✅ Renamed | ⚠️ CRITICAL: default changed from `{}` to `null`. Changing this will RECREATE the cluster! Keep the existing KMS key ARN to avoid recreation. |
| **Addons** | | | |
| cluster_addons | addons | ✅ Renamed | Strip cluster_ prefix |
| cluster_addons_timeouts | addons_timeouts | ✅ Renamed | Strip cluster_ prefix |
| **Identity Providers** | | | |
| cluster_identity_providers | identity_providers | ✅ Renamed | Strip cluster_ prefix |
| **Node Groups** | | | |
| eks_managed_node_group_defaults | — | ❌ Removed | Merge defaults into each node group |
| self_managed_node_group_defaults | — | ❌ Removed | Merge defaults into each node group |
| enable_efa_support (cluster level) | — | ❌ Removed | Set enable_efa_support only at node group level |
| enable_security_groups_for_pods | — | ❌ Removed | Use IAM policy AmazonEKSVPCResourceController instead |
| bootstrap_self_managed_addons | — | ❌ Removed | Hardcoded to false (legacy, now uses EKS addons API) |
| **Other** | | | |
| cluster_ip_family | ip_family | ✅ Renamed | Strip cluster_ prefix |
| cluster_service_ipv4_cidr | service_ipv4_cidr | ✅ Renamed | Strip cluster_ prefix |
| cluster_service_ipv6_cidr | service_ipv6_cidr | ✅ Renamed | Strip cluster_ prefix |
| cluster_timeouts | timeouts | ✅ Renamed | Strip cluster_ prefix |
| cluster_force_update_version | force_update_version | ✅ Renamed | Strip cluster_ prefix |
| cluster_compute_config | compute_config | ✅ Renamed | Strip cluster_ prefix |
| cluster_upgrade_policy | upgrade_policy | ✅ Renamed | Strip cluster_ prefix |
| cluster_remote_network_config | remote_network_config | ✅ Renamed | Strip cluster_ prefix |
| cluster_zonal_shift_config | zonal_shift_config | ✅ Renamed | Strip cluster_ prefix |
| cluster_additional_security_group_ids | additional_security_group_ids | ✅ Renamed | Strip cluster_ prefix |

Conclusion

The EKS Terraform module has many features and a complex structure. The wider the gap between the major versions you run and the latest release, the more complex the upgrade work becomes: skipping several major versions lets breaking changes accumulate, making migration very difficult.

Therefore, in environments using the EKS Terraform module, it is important to keep up with upgrades whenever major versions are released. This allows you to gradually apply the changes in each upgrade step and reduce the overall maintenance burden.

References