GKE Cluster Created with Incorrect Release Channel #1620

@jarpat

Description


TL;DR

When using the private cluster module without specifying the release_channel variable (default value is null), a cluster is intermittently created in the REGULAR release channel when I expected it to be in the UNSPECIFIED channel.

Expected behavior

When release_channel is not defined, I expect my cluster to be in the UNSPECIFIED channel.

According to the docs: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/private-cluster

release_channel: The release channel of this cluster. Accepted values are UNSPECIFIED, RAPID, REGULAR and STABLE. Defaults to UNSPECIFIED.

Observed behavior

The cluster is in the REGULAR release channel.

I reproduced the issue with the simple_regional_private example from this project, shown below.
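For reference, this is how the assigned channel can be checked from the CLI. The cluster name matches the example below; the region and project variables are placeholders for the values used in my setup:

```shell
# Ask GKE which release channel the cluster actually ended up in.
# $REGION and $PROJECT_ID are placeholders for the values in the example config.
gcloud container clusters describe simple-regional-private-cluster \
  --region "$REGION" \
  --project "$PROJECT_ID" \
  --format="value(releaseChannel.channel)"
# Empty output typically means no channel (UNSPECIFIED);
# "REGULAR" output reproduces the behavior described above.
```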

Terraform Configuration

# I reproduced the issue using the Simple Regional Private Cluster example from this project,
# with one minor modification: I specified kubernetes_version = "1.25"

# https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_regional_private

locals {
  cluster_type = "simple-regional-private"
}

data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}

data "google_compute_subnetwork" "subnetwork" {
  name    = var.subnetwork
  project = var.project_id
  region  = var.region
}

module "gke" {
  source                    = "../../modules/private-cluster/"
  project_id                = var.project_id
  name                      = "${local.cluster_type}-cluster${var.cluster_name_suffix}"
  regional                  = true
  region                    = var.region
  network                   = var.network
  subnetwork                = var.subnetwork
  ip_range_pods             = var.ip_range_pods
  ip_range_services         = var.ip_range_services
  create_service_account    = false
  service_account           = var.compute_engine_service_account
  enable_private_endpoint   = true
  enable_private_nodes      = true
  master_ipv4_cidr_block    = "172.16.0.0/28"
  default_max_pods_per_node = 20
  remove_default_node_pool  = true
  kubernetes_version        = "1.25"

  node_pools = [
    {
      name              = "pool-01"
      min_count         = 1
      max_count         = 100
      local_ssd_count   = 0
      disk_size_gb      = 100
      disk_type         = "pd-standard"
      auto_repair       = true
      auto_upgrade      = true
      service_account   = var.compute_engine_service_account
      preemptible       = false
      max_pods_per_node = 12
    },
  ]

  master_authorized_networks = [
    {
      cidr_block   = data.google_compute_subnetwork.subnetwork.ip_cidr_range
      display_name = "VPC"
    },
  ]
}

Terraform Version

$ terraform version
Terraform v1.0.0
on linux_amd64
+ provider registry.terraform.io/hashicorp/google v4.63.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.20.0
+ provider registry.terraform.io/hashicorp/random v3.5.1

Additional information

The issue is intermittent: sometimes the resulting cluster is in the UNSPECIFIED channel, and other times in the REGULAR channel.
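As a possible workaround (an assumption on my part, not verified against the module internals), explicitly pinning the channel instead of relying on the null default should remove the ambiguity:

```hcl
module "gke" {
  source = "../../modules/private-cluster/"
  # ...same arguments as in the configuration above...

  # Explicitly set the channel rather than leaving release_channel = null,
  # so the module cannot fall back to a provider/API default.
  release_channel = "UNSPECIFIED"
}
```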
