GCP – A new flexible, simplified, and more secure way to configure GKE cluster connectivity
Google Kubernetes Engine (GKE) provides users with a lot of options when it comes to configuring their cluster networks. But with today’s highly dynamic environments, GKE platform operators tell us that they want more flexibility when it comes to changing up their configurations. To help, today we are excited to announce a set of features and capabilities designed to make GKE cluster and control-plane networking more flexible and easier to configure.
Specifically, we’ve decoupled GKE control-plane access from node-pool IP configuration, providing you with granular control over each aspect. Furthermore, we’ve introduced enhancements to each sub-component, including:
Cluster control-plane access:

- Added a DNS-based approach to accessing the control plane. In addition, you can now enable or disable IP-based or DNS-based access to control-plane endpoints at any time.
- With IP-based access control, you can now lock down the private IP endpoint to selected RFC 1918 addresses (via authorized networks) for both VPC Peering-based and Private Service Connect-based clusters.

Node pool and IP address flexibility:

- Each node-pool now has its own configuration, and you can detach or attach a public IP for each node-pool independently at any time during the node-pool's lifecycle.
- You can now change a cluster's default configuration for attaching a public IP to newly provisioned node pools at any time. This configuration change doesn't require you to re-create your cluster.
Regardless of how you configure a cluster's control-plane access, or attach and detach a public IP from a node pool, the traffic between nodes and the cluster's control plane always remains private.
With these new changes, going forward:
- GKE platform admins and operators can now easily switch between less restrictive networking configurations (e.g., control plane and/or nodes accessible from the internet) and the most restrictive configurations, where only authorized users can access the control plane and nodes are not exposed to the internet. The decision to make a cluster public or private is no longer immutable, giving customers more flexibility without having to make upfront decisions.
- There are more ways to connect to the GKE control plane. In addition to IP-based access, we now introduce DNS-based access to the control plane. You can use IAM and authentication-based policies to add policy-based, dynamic security for access to the GKE control plane (a sketch follows below).
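For example, one minimal sketch of such policy-based access is an ordinary IAM binding; the project, user, and the choice of the predefined roles/container.developer role below are illustrative assumptions, not prescriptions from this post:

# Hedged sketch: grant a user an IAM role that carries GKE cluster access permissions,
# so their access to the control plane is governed by policy rather than source IP.
gcloud projects add-iam-policy-binding my-project \
    --member="user:alice@example.com" \
    --role="roles/container.developer"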
Previous challenges
Due to the complexity and variety of customer workloads and use cases, it is important to provide a simple and flexible way for customers to configure and operate connectivity to the GKE control plane and GKE nodes.
Control-plane connectivity and node-pool configuration are a key part of configuring GKE. We’ve continuously enhanced GKE’s networking capabilities to address customer concerns, providing more options for secure and flexible connectivity, including capabilities such as private clusters, VPC Peering-based connectivity, Private Service Connect-based connectivity, and private/public node pools.
While these brought many improvements in configuration, usability, and secure connectivity, certain challenges around complexity, usability, and scale remained, such as:
- Inflexible GKE control plane and node access configuration: GKE customers had to make an upfront, one-way decision about whether to create a private or public cluster. This configuration could not be changed without re-creating the cluster.
- The node-pool network IP type configuration could not be changed once a cluster was created.
- Confusing terms such as "public" and "private" clusters, which made it unclear whether the configuration applied to control-plane access or to node-pool configuration.
Benefits of the new features
With these changes to GKE networking, we hope you will see benefits in the following areas.
Flexibility:
- Clusters now have a unified and flexible configuration. Clusters with or without external endpoints all share the same architecture and support the same functionality. You can secure access to clusters based on controls and best practices that meet your needs. All communication between the nodes in your cluster and the control plane uses a private internal IP address.
- You can change the control-plane access and cluster node configuration settings at any time without having to re-create the cluster (see the sketch after this list).
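As a sketch of what changing these settings later might look like, the access flags shown in the creation examples below should, to my understanding, also be accepted by gcloud container clusters update; the cluster name and the assumption that the same flags apply on update are mine, not taken from this post:

# Hedged sketch: switch an existing cluster to DNS-based control-plane access only,
# assuming the create-time access flags are also available on `clusters update`.
gcloud container clusters update my-cluster \
    --enable-dns-access \
    --no-enable-ip-access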
Security:
- DNS-based endpoints with VPC Service Controls provide a multi-layer security model that protects your cluster against unauthorized networks as well as unauthorized identities accessing the control plane. VPC Service Controls integrate with Cloud Audit Logs to monitor access to the control plane.
- Private nodes and the workloads running on them are not directly accessible from the public internet, significantly reducing the potential for external attacks targeting your workloads.
- You can block control-plane access from Google Cloud external IP addresses, from other external IP addresses, or both, to fully isolate the cluster control plane and reduce exposure to potential security threats (see the sketch below).
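A minimal sketch of that last point follows; the --no-enable-google-cloud-access flag reflects my understanding of how access from Google Cloud external IP addresses is blocked, and the cluster name is a placeholder:

# Hedged sketch: block control-plane access originating from Google Cloud external IPs
# on an existing cluster, further narrowing who can reach the IP-based endpoints.
gcloud container clusters update my-cluster \
    --no-enable-google-cloud-access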
Compliance: If you work in an industry with strict data-access and storage regulations, private nodes help ensure that sensitive data remains within your private network.
Control: Private nodes give you granular control over how traffic flows in and out of your cluster. You can configure firewall rules and network policies to allow only authorized communication. If you operate across a multi-cloud environment, private nodes can help you establish secure and controlled communication between different environments.
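As one hedged illustration of the firewall side of this control, a VPC firewall rule can limit which sources reach the nodes at all; the network name, node tag, and source range below are hypothetical:

# Hedged sketch with hypothetical names: allow inbound HTTPS to the cluster's nodes
# only from a known corporate range, matched by the nodes' network tag.
gcloud compute firewall-rules create allow-corp-to-gke-nodes \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=192.168.0.0/16 \
    --target-tags=gke-my-cluster-node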
Getting started
Accessing the cluster control plane
There are now several ways to access a cluster's control plane: via the traditional public or private IP-based endpoints, and via the new DNS-based endpoint. Whereas IP-based endpoints entail tedious IP address configuration (including static authorized-network configuration, allowing private access from other regions, etc.), DNS-based endpoints offer a simplified, IAM policy-based, dynamic, flexible, and more secure way to access a cluster's control plane.
With these changes, you can now configure the cluster's control plane to be reachable by all three endpoints (DNS-based, public IP-based, or private IP-based) at the same time, or lock the cluster down to the granularity of a single endpoint, in any permutation that you would like. You can apply your desired configuration at cluster creation time or adjust it later.
Examples:
Enable the DNS-based endpoint only (preferred):
gcloud container clusters create my-cluster \
    --enable-dns-access \
    --no-enable-ip-access
Enable access over all three endpoints:
gcloud container clusters create my-cluster \
    --enable-dns-access \
    --enable-ip-access \
    --no-enable-private-endpoint
Enable access over the DNS-based endpoint and the private IP-based endpoint, restricted to a small range of RFC 1918 addresses:
gcloud container clusters create my-cluster \
    --enable-dns-access \
    --enable-ip-access \
    --enable-private-endpoint \
    --enable-master-authorized-networks \
    --master-authorized-networks=10.0.3.0/28 \
    --enable-authorized-networks-on-private-endpoint
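Once DNS-based access is enabled, connecting through it should, to my understanding, be a matter of fetching credentials that point kubectl at the DNS endpoint; the --dns-endpoint flag, location, and cluster name here are my assumptions rather than details from this post:

# Hedged sketch: fetch kubeconfig credentials that use the cluster's DNS-based endpoint,
# then run a normal kubectl command; access is authorized via IAM rather than source IP.
gcloud container clusters get-credentials my-cluster \
    --location us-central1 \
    --dns-endpoint
kubectl get nodes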
Node-pools
Here’s how to configure access for GKE node-pools.
GKE Standard Mode:
In the GKE Standard mode of operation, a private IP is always attached to every node. This private IP is used for private connectivity to the cluster's control plane.
You can add or remove a public IP for all nodes in a node-pool at node-pool creation time. This configuration can be performed on each node-pool independently.
gcloud container node-pools create my-private-node-pool \
    --cluster my-cluster \
    --enable-private-nodes
You can also add or remove a public IP after the node-pool has been created:
gcloud container node-pools update my-private-node-pool \
    --cluster my-cluster \
    --enable-private-nodes
Each cluster also has a default setting that is applied at node-pool creation time if the flag is not explicitly set for that node-pool:
gcloud container clusters update my-cluster \
    --enable-private-nodes
Note: Mutating a cluster's default state does not change the behavior of existing node pools. The new state is used only when a new node-pool is created.
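For example, a new node-pool can still override the cluster default explicitly at creation time; --no-enable-private-nodes is the standard gcloud negation of the flag shown above, and the names are placeholders:

# Hedged sketch: create a node pool with public IPs even if the cluster default is
# private nodes, by setting the per-node-pool flag explicitly at creation time.
gcloud container node-pools create my-public-node-pool \
    --cluster my-cluster \
    --no-enable-private-nodes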
GKE Autopilot mode of operation:
In the GKE Autopilot mode of operation, whether workloads run on nodes with or without a public IP is determined by the cluster's default behavior. You can override the cluster's default behavior for each workload independently by adding the following nodeSelector to your Pod specification:
spec:
  nodeSelector:
    cloud.google.com/private-node: "true"  # or "false"
However, changing a cluster's default behavior causes all workloads for which this behavior hasn't been explicitly set to be rescheduled onto nodes that match the new default.
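To see how this maps onto the nodes themselves, the label targeted by the nodeSelector can be listed directly; this assumes, as the selector implies, that nodes carry the cloud.google.com/private-node label:

# Hedged sketch: show the private-node label as an extra column when listing nodes,
# making it easy to see which nodes have no public IP.
kubectl get nodes -L cloud.google.com/private-node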
Conclusion
Given the complexity and variety of workloads that run on GKE, it's important to have a simple and flexible way to configure and operate connectivity to the GKE control plane and nodes. We hope these enhancements to GKE control-plane connectivity and node-pool configuration bring new levels of flexibility and simplicity to GKE operations. For further details, please see the GKE documentation.