# AckeeCZ/vpc/gke

GCP GKE module: provisions a GKE cluster with underlying infrastructure.
## Terraform Google Kubernetes Engine VPC-native module

Terraform module for provisioning a GKE cluster with VPC-native nodes and support for private networking (no public IP addresses).

### Private networking

Private GKE cluster creation is divided into a few parts:

#### Private nodes

Turned on with the `private` parameter: all GKE nodes are created without a public IP address and thus without a route to the internet.

#### Cloud NAT gateway and Cloud Router

Creating a GKE cluster with private nodes means the nodes have no internet connection. Creating the NAT gateway is no longer part of this module. You can use the upstream Google Terraform module like this (the original snippet is truncated after `source`, so the `cloud-nat` arguments below are a sketch based on the upstream `terraform-google-modules/cloud-nat/google` module; adjust them to your setup):

```hcl
resource "google_compute_address" "outgoing_traffic_europe_west3" {
  name    = "nat-external-address-europe-west3"
  region  = var.region
  project = var.project
}

module "cloud-nat" {
  # upstream Google Cloud NAT module (assumed source and arguments)
  source        = "terraform-google-modules/cloud-nat/google"
  project_id    = var.project
  region        = var.region
  create_router = true
  router        = "nat-router-europe-west3"
  network       = var.network
  nat_ips       = [google_compute_address.outgoing_traffic_europe_west3.self_link]
}
```
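A minimal sketch of calling the module itself, using the required inputs from the table below together with the `private` parameter described above (the project ID, Vault path, and module label are hypothetical placeholders):

```hcl
module "gke" {
  source = "AckeeCZ/vpc/gke"

  project           = "my-gcp-project"        # hypothetical GCP project ID
  vault_secret_path = "secret/gke/my-cluster" # hypothetical Vault path for GKE credentials
  private           = true                    # provision nodes without public IPs
}
```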
| Name | Type | Description | Default |
|---|---|---|---|
| vault_secret_path | string | Path to secret in local vault, used mainly to save gke credentials | required |
| project | string | GCP project ID | required |
| ci_sa_email | string | Email of Service Account used for CI deploys | "gitlab@infrastruktura-1307.iam.gserviceaccount.com" |
| image_streaming | bool | Enable GKE image streaming feature. | false |
| location | string | Default GCP zone | "europe-west3-c" |
| auto_upgrade | bool | Allow auto upgrade of node pool | false |
| traefik_version | string | Version number of helm chart | "1.7.2" |
| traefik_custom_values | list(object({ name = string, … })) | Traefik Helm chart custom values list | [{"name": "ssl.enabled", "value": …}] |
| private_master_subnet | string | Subnet for the private GKE master. There will be a peering routed to the VPC created with … | "172.16.0.0/28" |
| managed_prometheus_enable | bool | Configuration for Managed Service for Prometheus. | false |
| cluster_admins | list(string) | List of users granted admin roles inside cluster | [] |
| region | string | GCP region | "europe-west3" |
| services_ipv4_cidr_block | string | Optional IP address range of the services IPs in this cluster. Set to blank to have a range chosen with the default size. | "" |
| node_pools | map(any) | Definition of the node pools, by default uses only ackee_pool | {} |
| monitoring_config_enable_components | list(string) | The GKE components exposing logs. SYSTEM_COMPONENTS and, in the beta provider, both S… | null |
| enable_cert_manager | bool | Enable cert-manager helm chart | false |
| cert_manager_version | string | Version number of helm chart | "v1.6.1" |
| node_pool_location_policy | string | Node pool load balancing location policy | "BALANCED" |
| enable_traefik | bool | Enable traefik helm chart for VPC | false |
| network | string | Name of VPC network we are deploying to | "default" |
| maintenance_window_time | string | Time when the maintenance window begins. | "01:00" |
| dns_nodelocal_cache | bool | Enable NodeLocal DNS Cache. This is a disruptive operation; all cluster nodes are … | false |
| … and 7 more inputs | | | |
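The `node_pools` input is a `map(any)` keyed by pool name (by default only `ackee_pool` is used). A hypothetical override is sketched below; the attribute names inside each pool are assumptions, so check the module's default pool definition for the exact keys:

```hcl
node_pools = {
  # hypothetical custom pool; attribute names are assumed, not taken from the module docs
  ackee_pool = {
    machine_type = "e2-standard-4"
    min_count    = 1
    max_count    = 3
  }
}
```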
## Outputs

| Name | Description |
|---|---|
| cluster_ipv4_cidr | The IP address range of the Kubernetes pods in this cluster, in CIDR notation |
| endpoint | Cluster control plane endpoint |
| node_pools | List of node pools associated with this cluster |
| client_certificate | Client certificate used in kubeconfig |
| client_key | Client key used in kubeconfig |
| cluster_ca_certificate | Cluster CA certificate used in kubeconfig |
| access_token | Client access token used in kubeconfig |
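The credential outputs can be wired straight into a Kubernetes provider. A sketch, assuming the module is instantiated under the label `gke` and that `cluster_ca_certificate` is base64-encoded, as GKE returns it:

```hcl
provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = module.gke.access_token
  cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
```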