Overview

This upgrade guide assumes you have already upgraded to terraform-aws-modules/eks/aws v20 by following the EKS module v17 to v20 upgrade guide, and are now proceeding with the upgrade to v21.

Upgrade Procedure

The following scenario explains the process of upgrading terraform-aws-modules/eks/aws from v20.37.2 to v21.1.0.

Step 1: Update the EKS module version in your main Terraform code

# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.1.0" # Bump from 20.37.2 to 21.1.0
}

Starting with v21, terraform-aws-modules/eks/aws is fully compatible with AWS provider v6.x, so there is no longer any need to pin the AWS provider below 6.0 in versions.tf.

# versions.tf
terraform {
  required_version = ">= 1.5.7"
  
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 6.0.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = ">= 4.0.0"
    }
  }
}
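
After widening the constraint to >= 6.0.0, terraform init -upgrade (Step 8) will install the v6.x provider.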

Step 2: Update Karpenter submodule if used

The Karpenter submodule ships as part of the EKS module. If you are also using it, bump its version to match the root EKS module.

# main.tf
module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "21.1.0" # Bump from 20.37.2 to 21.1.0
}

Step 3: Update variable names

The biggest change in EKS module v21 is the removal of the cluster_ prefix from variable names.

Most variable names are affected by this change, so apply the renames listed in the official v20 to v21 upgrade guide throughout your Terraform code. For example:

# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.1.0" # Bump from 20.37.2 to 21.1.0

  # cluster_name -> name
  name = local.name

  # cluster_version -> kubernetes_version
  kubernetes_version = local.cluster_version
}

List of Variable Changes

  • Variables prefixed with cluster_* have been stripped of the prefix to better match the underlying API:

    • cluster_name → name
    • cluster_version → kubernetes_version
    • cluster_enabled_log_types → enabled_log_types
    • cluster_force_update_version → force_update_version
    • cluster_compute_config → compute_config
    • cluster_upgrade_policy → upgrade_policy
    • cluster_remote_network_config → remote_network_config
    • cluster_zonal_shift_config → zonal_shift_config
    • cluster_additional_security_group_ids → additional_security_group_ids
    • cluster_endpoint_private_access → endpoint_private_access
    • cluster_endpoint_public_access → endpoint_public_access
    • cluster_endpoint_public_access_cidrs → endpoint_public_access_cidrs
    • cluster_ip_family → ip_family
    • cluster_service_ipv4_cidr → service_ipv4_cidr
    • cluster_service_ipv6_cidr → service_ipv6_cidr
    • cluster_encryption_config → encryption_config
    • create_cluster_primary_security_group_tags → create_primary_security_group_tags
    • cluster_timeouts → timeouts
    • create_cluster_security_group → create_security_group
    • cluster_security_group_id → security_group_id
    • cluster_security_group_name → security_group_name
    • cluster_security_group_use_name_prefix → security_group_use_name_prefix
    • cluster_security_group_description → security_group_description
    • cluster_security_group_additional_rules → security_group_additional_rules
    • cluster_security_group_tags → security_group_tags
    • cluster_encryption_policy_use_name_prefix → encryption_policy_use_name_prefix
    • cluster_encryption_policy_name → encryption_policy_name
    • cluster_encryption_policy_description → encryption_policy_description
    • cluster_encryption_policy_path → encryption_policy_path
    • cluster_encryption_policy_tags → encryption_policy_tags
    • cluster_addons → addons
    • cluster_addons_timeouts → addons_timeouts
    • cluster_identity_providers → identity_providers
  • eks-managed-node-group sub-module (see the sketch after this list)

    • cluster_version → kubernetes_version
  • self-managed-node-group sub-module

    • cluster_version → kubernetes_version
    • delete_timeout → timeouts
  • fargate-profile and karpenter sub-modules

    • None
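
For example, if you consume the eks-managed-node-group sub-module directly, the same rename applies there. A minimal sketch, showing only the renamed argument (the module label and locals are illustrative):

# main.tf - standalone eks-managed-node-group sub-module (illustrative)
module "managed_node_group" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "21.1.0"

  # cluster_version -> kubernetes_version
  kubernetes_version = local.cluster_version
}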

Step 4: Remove self_managed_node_group_defaults

Starting from terraform-aws-modules/eks/aws v21.x, self_managed_node_group_defaults is no longer supported.

# main.tf
module "eks" {
  # self_managed_node_group_defaults = {
  #   ami_type               = "AL2023_x86_64_STANDARD"
  #   vpc_security_group_ids = [aws_security_group.sg_eks_worker_from_lb.id]
  # }
}
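
Since there is no longer a single place for defaults, any values you declared there must move into each node group entry. A minimal sketch reusing the values from the removed block above:

# main.tf
module "eks" {
  self_managed_node_groups = {
    my_nodegroup_1 = {
      # Former defaults, now declared per node group
      ami_type               = "AL2023_x86_64_STANDARD"
      vpc_security_group_ids = [aws_security_group.sg_eks_worker_from_lb.id]
    }
  }
}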

Step 5: Add launch_template and vpc_security_group_ids to mixed_instances_policy

Nest the override blocks under launch_template inside mixed_instances_policy, and set vpc_security_group_ids at the node group level. Repeat this for every node group declared in self_managed_node_groups.

# main.tf
module "eks" {
    source  = "terraform-aws-modules/eks/aws"
    version = "21.1.0" # Bump from 20.37.2 to 21.1.0

    self_managed_node_groups = {
        my_nodegroup_1 = {
            vpc_security_group_ids = [
                aws_security_group.sg_eks_worker_from_lb.id
            ]

            mixed_instances_policy = {
                # Add launch_template in mixed_instances_policy
                launch_template = {
                    override = [
                        {
                            instance_requirements = {
                                cpu_manufacturers                           = ["intel"]
                                instance_generations                        = ["current", "previous"]
                                spot_max_price_percentage_over_lowest_price = 100

                                vcpu_count = {
                                    min = 1
                                }

                                allowed_instance_types = ["t*", "m*"]
                            }
                        }
                    ]
                }
            }
        }
    }
}
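
This nesting follows the aws_autoscaling_group resource schema, where override blocks live under mixed_instances_policy.launch_template, which v21 maps through to the provider more directly.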

Step 6: Configure metadata_options for IMDS access

Starting from EKS module v21, the default http_put_response_hop_limit has been changed from 2 to 1. This change can cause issues with pods accessing the Instance Metadata Service (IMDS).

Why this matters:

  • Pods running on EC2 instances need to access IMDS through the node
  • A hop limit of 1 only allows the instance itself to access IMDS
  • A hop limit of 2 is required for containers/pods to access IMDS through the node

Solution: Add metadata_options to your node groups (self-managed, EKS-managed, or both):

# main.tf - For self-managed node groups
module "eks" {
    source  = "terraform-aws-modules/eks/aws"
    version = "21.1.0"

    self_managed_node_groups = {
        my_nodegroup_1 = {
            vpc_security_group_ids = [
                aws_security_group.sg_eks_worker_from_lb.id
            ]

            # Add metadata_options to allow pods to access IMDS
            metadata_options = {
                http_put_response_hop_limit = 2  # Required for pods to access IMDS
            }

            mixed_instances_policy = {
                launch_template = {
                    override = [
                        {
                            instance_requirements = {
                                cpu_manufacturers                           = ["intel"]
                                instance_generations                        = ["current", "previous"]
                                spot_max_price_percentage_over_lowest_price = 100

                                vcpu_count = {
                                    min = 1
                                }

                                allowed_instance_types = ["t*", "m*"]
                            }
                        }
                    ]
                }
            }
        }
    }
}
# main.tf - For EKS-managed node groups
module "eks" {
    source  = "terraform-aws-modules/eks/aws"
    version = "21.1.0"

    eks_managed_node_groups = {
        my_eks_nodegroup = {
            # Add metadata_options to allow pods to access IMDS
            metadata_options = {
                http_put_response_hop_limit = 2  # Required for pods to access IMDS
            }
        }
    }
}

Important: If you don’t configure this, workloads that access instance metadata from inside a pod may fail. Examples include pods that rely on the node IAM role for credentials, and SDKs that query IMDS for the region even when IAM roles for service accounts (IRSA) provide the credentials.
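
A quick way to check the effect from a running pod (assuming the container image includes curl; the pod name is a placeholder) is to request an IMDSv2 token, which times out when the hop limit is 1:

# Request an IMDSv2 token from inside a pod; with a hop limit of 1 this times out
kubectl exec my-pod -- curl -s --max-time 5 -X PUT \
  "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60"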

Step 7: Enable deletion protection

Terraform EKS module v21.1.0 adds support for EKS deletion protection, so it is recommended to set deletion_protection as shown below. This prevents accidental deletion of the cluster.

# main.tf
module "eks" {
    source  = "terraform-aws-modules/eks/aws"
    version = "21.1.0"

    name                = local.name
    kubernetes_version  = local.cluster_version
    deletion_protection = true
}
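
Note that once deletion protection is enabled, terraform destroy (or any other DeleteCluster call) will fail until you first set deletion_protection = false and apply that change.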

Step 8: Apply the changes

Once all of the above changes are in place, perform the module upgrade. Review the plan output carefully before applying; the variable renames by themselves should not cause any resources to be replaced.

terraform init -upgrade
terraform plan
terraform apply

Conclusion

The EKS Terraform module has many features and a complex structure, so the wider the gap between major versions, the more complex the upgrade work becomes. In particular, skipping several major versions lets breaking changes accumulate, making the migration much more difficult.

Therefore, in environments using the EKS Terraform module, it is important to keep up with upgrades whenever major versions are released. This allows you to gradually apply the changes in each upgrade step and reduce the overall maintenance burden.
