GCP – Try the new Managed Service for Apache Kafka and take cluster management off your to-do list
You can learn a lot from running a distributed storage system like Apache Kafka™. You start by designing and setting up security and networking for your cluster, then develop processes and tools for reliable operation and upgrades. All the while, you negotiate scaling up for new traffic and scaling back down to save money. This is a rewarding but difficult adventure.
We are here to offer a shortcut: the new Google Cloud Managed Service for Apache Kafka. This service takes care of the high-stakes, sometimes tedious work of running infrastructure. A preview is available right now in your Google Cloud project.
This service is an alternative to Cloud Pub/Sub. Pub/Sub makes sense for applications where the utmost ease of maintenance or a global resource model is the deciding factor. However, if you are looking for a familiar technology that is portable across environments, the Managed Service for Apache Kafka is for you.
If you are considering running your own Apache Kafka infrastructure, we think the service will leave you with more time for other adventures on your to-do list. Read on to see what this new service can do for you.
What is Apache Kafka?
First, a brief detour into how Apache Kafka helps you build scalable software and businesses. On a technical level, Apache Kafka is a scalable storage system used to share events among applications. If you need to build a service that reacts to an event, you can get that event from Kafka, which is much easier than integrating directly with the application that produced it. This decoupling makes it possible to build complicated software that is both durable and scalable, because Kafka, unlike many other storage systems, scales horizontally to gigabytes per second of throughput. And unlike object storage, Kafka delivers individual records within milliseconds rather than in large, immutable batches.
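To make the decoupling concrete, here is a minimal sketch using the open-source confluent-kafka Python client. The topic name and bootstrap address are placeholders, and security settings are omitted for brevity.

```python
from confluent_kafka import Producer, Consumer

BOOTSTRAP = "bootstrap.my-kafka.example:9092"  # placeholder address

# The application that observes an event publishes it once to a topic...
producer = Producer({"bootstrap.servers": BOOTSTRAP})
producer.produce(
    "clickstream",                       # topic
    key=b"user-x",
    value=b'{"event": "clicked product Y"}',
)
producer.flush()

# ...and any number of downstream services read it independently, each with
# its own consumer group and offsets, without ever calling the producer.
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "recommendations",       # e.g. "viewing-history" would be a second, independent reader
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["clickstream"])
msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```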
Where is such a system useful? If you work in retail, you likely have several services that process events like “user X clicked on product Y.” For example, you may want to build a recommendation system or show the user a history of products they recently viewed. And of course, the data science team will want a full history of user interactions. Or if you work in manufacturing, your events may look like “sensor X reported measurement Y.” What you do with them may range from creating operational monitoring dashboards to developing and running early failure detection algorithms. For financial service companies, stock sales and transactions can feed into fraud detection pipelines and balance counters.
How does the Managed Service for Apache Kafka help?
Let’s take a look at what it actually takes to run an Apache Kafka cluster with Managed Service for Apache Kafka. Or, to be more specific, a secure, scalable cluster.
The first step is getting access to the service. If you are a project administrator, you already have it: the service is included with your existing account and terms of service, and it is covered by your payment method or contract.
The second step is deciding how to secure the cluster. With Managed Service for Apache Kafka, this is also built in. Every cluster accepts only encrypted, authenticated connections. Authentication works with any identity that Google Cloud IAM supports, including service accounts and Workforce Identity Federation accounts, and you don’t need to manage those identities inside the Kafka cluster. Clients can authenticate with OAuth, which automates credential handling on Google Cloud, or with username/password (SASL_PLAIN) authentication for legacy applications. And of course, data is encrypted at rest, including support for customer-managed encryption keys (CMEK).
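As an illustration, here is a hedged sketch of an authenticated producer configuration using the confluent-kafka Python client. It assumes the brokers accept a Google OAuth 2.0 access token from Application Default Credentials as the OAUTHBEARER token; check the service documentation for the exact token format, and note that the bootstrap address is a placeholder.

```python
import time
from datetime import timezone

import google.auth
from google.auth.transport.requests import Request
from confluent_kafka import Producer

def google_oauth_cb(_oauthbearer_config):
    """Return (token, expiry-as-unix-seconds), as confluent-kafka's oauth_cb expects.

    The token comes from Application Default Credentials, so the same code works
    for service accounts, workload identity, or a developer's gcloud login.
    """
    creds, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    creds.refresh(Request())
    expiry = (
        creds.expiry.replace(tzinfo=timezone.utc).timestamp()
        if creds.expiry
        else time.time() + 3600
    )
    return creds.token, expiry

producer = Producer({
    "bootstrap.servers": "bootstrap.my-cluster.example:9092",  # placeholder
    "security.protocol": "SASL_SSL",      # only encrypted, authenticated connections are accepted
    "sasl.mechanisms": "OAUTHBEARER",
    "oauth_cb": google_oauth_cb,
    # Legacy applications can use SASL_PLAIN instead:
    #   "sasl.mechanisms": "PLAIN",
    #   "sasl.username": "...", "sasl.password": "...",
})
```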
Next comes network design, which Managed Service for Apache Kafka also entirely automates. You simply decide which VPC networks your clients will use to access the service, and the service does the rest. You can expose a cluster in up to 10 different VPC networks, so you can separate different clients into different networks and reach the cluster across regions, with no VPC peering required. In each network, the service creates private IP addresses for the brokers and the bootstrap server, then sets up DNS entries so that all the clients need to know is the bootstrap URL for the cluster. There is, by the way, just one bootstrap URL because, yes, a load balancer is set up in each cluster.
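The practical upshot for clients is that broker discovery needs nothing beyond that single bootstrap address. A small sketch, again with a placeholder address and the security settings from the previous example omitted:

```python
from confluent_kafka.admin import AdminClient

# The bootstrap URL is the only address a client has to know; the DNS entries
# and per-cluster load balancer route it to the brokers' private IPs.
admin = AdminClient({"bootstrap.servers": "bootstrap.my-cluster.example:9092"})  # placeholder
metadata = admin.list_topics(timeout=10.0)
for broker in metadata.brokers.values():
    print(broker.id, broker.host, broker.port)
```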
Now comes the sizing. What broker machine type do you use, how many brokers, and how do you set up the disks? This is simple: the service fully manages brokers and storage. You are responsible for just two settings, the number of vCPUs and the amount of RAM for the cluster. Once you set these, broker management is automated, including horizontal and vertical scaling of brokers and storage. Storage is tiered: minimal local storage for buffering, plus effectively unlimited remote storage. This means you can store as much Kafka data as you like and never worry about running out of disk space.
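Tying the networking and sizing choices together, here is a hedged sketch that creates a cluster through the service's REST API. The endpoint and field names reflect one reading of the Managed Kafka API (a capacity config with vCPU count and memory bytes, plus the subnets to attach), so treat them as assumptions and check the API reference; the project, region, subnet, and cluster ID are placeholders.

```python
import google.auth
from google.auth.transport.requests import AuthorizedSession

PROJECT, REGION, CLUSTER_ID = "my-project", "us-central1", "my-kafka-cluster"  # placeholders

creds, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
session = AuthorizedSession(creds)

cluster = {
    # The two settings you manage: vCPUs and RAM for the whole cluster.
    "capacityConfig": {
        "vcpuCount": "3",
        "memoryBytes": str(3 * 1024**3),
    },
    # The VPC networks (via their subnets) whose clients should see the brokers.
    "gcpConfig": {
        "accessConfig": {
            "networkConfigs": [
                {"subnet": f"projects/{PROJECT}/regions/{REGION}/subnetworks/default"}
            ]
        }
    },
}

resp = session.post(
    f"https://managedkafka.googleapis.com/v1/projects/{PROJECT}/locations/{REGION}/clusters",
    params={"clusterId": CLUSTER_ID},
    json=cluster,
)
resp.raise_for_status()
print(resp.json())  # a long-running operation that tracks cluster creation
```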
Now you are ready to use the service. Since the service runs open-source Apache Kafka, clients that speak the Kafka protocol will work the way you expect. You get the important operational metrics in Cloud Monitoring and logs in Cloud Logging. And because the service is a Google Cloud API, it comes with client libraries, Terraform support, a UI, and a CLI.
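For example, the standard Cloud Monitoring client can read the cluster's metrics. The metric type below is a hypothetical placeholder, so look up the service's exact metric names in Metrics Explorer.

```python
import time
from google.cloud import monitoring_v3

PROJECT = "my-project"  # placeholder

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)
series = client.list_time_series(
    request={
        "name": f"projects/{PROJECT}",
        # Hypothetical metric type; check Metrics Explorer for the real names.
        "filter": 'metric.type = "managedkafka.googleapis.com/byte_in_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in series:
    for point in ts.points:
        print(point.interval.end_time, point.value)
```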
There is one last pleasant surprise waiting once the cluster is running at full tilt: automatic upgrades. The service keeps all the software up to date. No restarts required.
Try it for yourself: start publishing in about an hour
In short, Managed Service for Apache Kafka reduces the work of running an Apache Kafka cluster in Google Cloud to picking a subnet and keeping an eye on RAM and CPU utilization. Give it a try! You can get started in the Google Cloud console or by following this quickstart. Let us know if we missed anything by reaching out to your account team or by emailing kafka-hotline@google.com.