Azure – Public Preview: Azure SQL Database free offer
Get one free serverless Azure SQL Database per subscription for the lifetime of the subscription.
Read More for the details.
You can now partition your data in the analytical store using keys that are critical for your business logic to achieve better query and data load performance.
Read More for the details.
Tailor your maintenance windows to suit your schedule with Azure MySQL flexible maintenance.
Read More for the details.
Intelligent queueing and routing of inbound customer communications.
Read More for the details.
CoreLogic is a leading provider of global property information, analytics, and data-enabled solutions, and runs over 25,000 business-critical application instances on Google Cloud using container runtimes. Recently, it implemented several Google Cloud technologies that resulted in a 30% improvement in operational efficiency while increasing application performance and data availability for its business users. Read on to learn more.
CoreLogic’s property data powers real estate professionals, financial institutions, insurance carriers, government agencies, and other customers by providing access to over 5.5 billion property records spanning 50 years. In addition, CoreLogic processes $152 billion in combined tax payments for eight out of every 10 escrowed homeowners in the U.S.
To facilitate these and other operations, the company operates over 1,000 application ecosystems. For the past several years, CoreLogic has run these applications using commercial Cloud Foundry software, hosted primarily within its on-premises data centers.
As CoreLogic’s business and data growth accelerated, the deployment footprint for commercial Cloud Foundry was expanding rapidly, which was driving up licensing and operating costs. In addition, CoreLogic scaled its infrastructure to meet the demands of its expanding data processing needs. With its commitment to innovation and leadership in the data space, CoreLogic began to explore alternative solutions for its application platform needs.
After considering various cloud and on-premises options to host its applications, CoreLogic selected Google Kubernetes Engine (GKE) as its container runtime platform of choice. Partnering closely with Google Cloud, CoreLogic modernized these application ecosystems using the GKE stack.
The new Google Cloud-based modern application platform helps CoreLogic meet the business demands of its customers. The platform is reliable, resilient, and scalable, which allows CoreLogic to provide customers with the high-quality data and analytics they need to make informed decisions. Here are a few examples of CoreLogic’s leading analytics products built on the platform:
- Discovery Platform, a property analytics product that allows businesses, including CoreLogic’s core markets of property and real estate technology, mortgage lenders, marketers, and insurance firms, to discover, integrate, analyze, and model property insights to make critical business decisions faster
- Climate Risk Analytics, designed to help government agencies and enterprises measure, model, and mitigate the physical risks of climate change to the real estate industry
- Total Home ValueX™ (THVx) from CoreLogic, a modern automated valuation model that takes a new approach to property valuation. THVx is highly accurate and reliable, allowing CoreLogic to create assessments faster and its customers to make business decisions with confidence.
In short, the shift to Google Cloud goes beyond a matter of necessity and better aligns with CoreLogic’s broader focus of adopting industry standards like Kubernetes and investing in innovative technologies.
CoreLogic realized significant cost and operational efficiency gains from this implementation. Not only did the company eliminate all commercial Cloud Foundry licensing expenses but it also realized ongoing savings from cost reduction features like committed use discounts for Google Cloud infrastructure.
As part of its migration to Google Cloud, CoreLogic enabled multiple platform features to align infrastructure spending with business objectives and realize operational efficiencies through unlimited scale and optimization:
The platform processes tens of thousands of requests per second and operates on tens of terabytes of application data. With GKE auto-scaling as a key enabler, application containers automatically scale out and back in, responding quickly to any application transaction spikes.
CoreLogic uses GKE node pools to fine-tune infrastructure options tailored to specific workloads. CoreLogic’s Google Cloud infrastructure is much better suited for such workload distribution and scale out.
GKE’s fully managed control plane is backed by Google Site Reliability Engineers (SREs) and their reliability best practices. This means less operational toil and more productivity. CoreLogic teams no longer self-manage the Cloud Foundry control plane and data plane, which frees them up to do more valuable work. As a result of these changes, the company attracts and retains more productive development and engineering talent to run GKE on Google Cloud.
CoreLogic also adopted GKE Workload Identity, allowing application workloads to access Google Cloud services without managing Identity and Access Management (IAM) service accounts.
CoreLogic’s customers rely on its applications to make important decisions, so the uptime of the platform is essential to CoreLogic’s success in the data and analytics business. Automation and managed Google Cloud services are foundational components for high reliability, fewer human errors, and reduced business disruption.
CoreLogic realizes service reliability facets such as high availability and high performance using built-in GKE features, e.g., regional GKE clusters that remain fault tolerant even during complete zonal outages. Additionally, GKE’s node auto-repair and rolling updates enable zero-downtime software upgrades.
By defining application environments as data and utilizing Anthos’ configuration and policy management features, CoreLogic achieves improved auditability and drift management, for better operational governance.
CoreLogic uses Anthos Service Mesh traffic management and telemetry to realize full observability of its application services. The platform team has unified, real-time visibility across its entire application estate. Platform operators are now empowered to troubleshoot, configure, and optimize applications using real-time metrics.
CoreLogic’s ability to measure the performance of its service requests and identify slow-running services allows it to proactively resolve bottlenecks and exceed its service levels, to the benefit of its customers.
Looking ahead, CoreLogic has a tremendous opportunity to build on the next-generation GKE application platform foundation by taking advantage of the various current and emerging Google Cloud services and practices. The company may consider a zero-ops model, automating much of the infrastructure management with Google Cloud serverless technologies. And certainly, the business could benefit from additional insights powered by Google Cloud machine learning and data analytics. By exploring other Google Cloud services such as AI and ML, CoreLogic is poised to build richer applications and bring more value to its clients.
Ready to modernize your own environment? Check out this guide to Google Cloud Application Modernization.
Read More for the details.
At Google Cloud, we know that developer time is precious. Finding insight from logs can be critical for troubleshooting problems, optimizing performance, and making informed decisions about your cloud infrastructure.
Today, we’re pleased to announce integrated Log Analytics charts in Cloud Logging and the ability to save your charts to a dashboard in Cloud Monitoring – both available now in preview.
With this launch, you can now create a chart for your Log Analytics query results and then save that chart to a Cloud Monitoring dashboard.
Here are a few examples to get you started.
Log Analytics can help resolve issues more quickly both through dashboards and ad hoc queries. Cloud Logging collects application logs from GCP services including Google Kubernetes Engine (GKE) by default. When something goes wrong, it’s critical to find the root cause and resolve the issue quickly.
The dashboard below provides four different Log Analytics charts relevant to monitoring and troubleshooting an application. In all four of the charts, you can clearly see that an event occurred which caused a spike in volume, errors, and latency, along with a corresponding decrease in availability.
To dig into the issue, you can click the 3-dot menu and select Explore in Log Analytics, which opens the chart in the Log Analytics page.
From here, you can see that it’s the container type generating the log volume, and then refine the query to identify the service. In this case, by grouping by the Google Kubernetes Engine (GKE) label, it’s easy to tell that the frontend and payment services are generating the most errors. Armed with this information, you can investigate the details of the logs for those services in Logs Explorer and move to the root-cause phase of troubleshooting.
Cloud Audit logs provide detailed information about events happening in your cloud environment, and more specifically, data access logs provide details about usage of cloud resources such as Cloud Storage buckets. Audit logs are useful both for troubleshooting access issues and to understand what’s happening in your projects.
Log Analytics provides a way to analyze data access logs over time or grouped in other useful ways. For example, if you need to understand which users are accessing Cloud Storage buckets as a part of an incident, Log Analytics can help you find all the user access.
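As a rough illustration, a Log Analytics query along these lines surfaces per-user access to Cloud Storage. The project ID and log view path are placeholders to adapt to your environment, and the field paths should be checked against your own log schema:

```sql
-- Count Cloud Storage data-access audit entries per principal.
-- `my-project.global._Default._AllLogs` is a placeholder log view path.
SELECT
  proto_payload.audit_log.authentication_info.principal_email AS principal,
  COUNT(*) AS access_count
FROM `my-project.global._Default._AllLogs`
WHERE proto_payload.audit_log.service_name = 'storage.googleapis.com'
  AND log_name LIKE '%data_access%'
GROUP BY principal
ORDER BY access_count DESC
LIMIT 10
```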
The results are returned as a table.
Switching to the chart mode makes it very easy to see the users with the most access. Using the auth_permission field in the breakdown option shows the permissions for each user, making it easy to see which users accessed the most Cloud Storage objects. In this case, most of the access is from an App Engine app user, though there are individual users who have access logs as well.
By adding the chart to a dashboard, you can save the chart for later and get a clear picture of what’s happening in Cloud Storage.
To use the chart feature, first run a query in Log Analytics. By default, the query results are displayed in a table with the TABLE tab selected. To view a chart, click the CHART tab; once selected, an automatic charting configuration is applied to the query results. You can customize the chart by selecting different chart types, dimensions, measures, and measure functions, and even break down the results by a specific field.
To save the chart to a Cloud Monitoring dashboard, use the Save chart button, which allows you to name the chart and select an existing dashboard or create a new one. Once the chart is saved to the dashboard, you can find your dashboards by selecting the dropdown next to the Save chart button. This option provides a quick list of dashboards in the project and an option to open the Cloud Monitoring dashboards list page.
During the preview, editing a Log Analytics chart isn’t available directly in Cloud Monitoring; charts can be removed from a Cloud Monitoring dashboard and added again from Log Analytics. To open a chart in Log Analytics from a Cloud Monitoring dashboard, use the Explore in Log Analytics option from the three-dot menu on the chart.
These two new features in Cloud Logging can help you gain insights from your logs in new ways. But we’re not stopping there: we’re planning new chart types and full edit support from Cloud Monitoring dashboards. In the meantime, if you haven’t tried charting and dashboarding in Log Analytics, get started today.
Read More for the details.
BigQuery is a powerful cloud data warehouse that can handle demanding workloads. BigQuery users get the benefit of continuous improvements in performance, durability, efficiency, and scalability, without downtime or upgrades.
Today, we are pleased to announce the general availability of query queues in BigQuery.
BigQuery query queues introduce a dynamic concurrency limit and enable queueing, and are on by default for all BigQuery customers. Previously, BigQuery supported a fixed concurrency limit of 100 queries per project; when the number of queries exceeded this limit, users received a quota exceeded error when attempting to submit an interactive job.
Concurrency is now calculated dynamically based on the available slot capacity and the number of queries that are currently running. While most customers will use the dynamic concurrency calculation, administrators can also choose to set a maximum concurrency target for a reservation to ensure that each query has enough slot capacity to run. This also means that queries that cannot be processed immediately are added to a queue and run as soon as resources become available, instead of failing.
Using query queues
Here is what happens with query queues enabled:
- Dynamic concurrency: BigQuery dynamically determines the concurrency based on available resources and can automatically set and manage the concurrency based on reservation size and usage patterns. While the default concurrency configuration is set to zero, which enables dynamic configuration, experienced administrators can manually override this option by specifying a target concurrency limit. The admin-specified limit can’t exceed the maximum concurrency provided by available slots, and the limit is not configurable for on-demand workloads.
- Queuing: Query queues help manage scenarios where peak workloads generate a sudden increase in queries that exceeds the maximum concurrency limit. With queuing enabled, BigQuery can queue up to 1,000 interactive queries and 20,000 batch queries, ensuring that they are scheduled for execution rather than being terminated due to concurrency limits, as was previously the case. Users no longer need to search for idle times or periods of low usage to optimize when to submit their workload requests; BigQuery automatically runs their requests or schedules them in a queue to run as soon as the current running workloads have finished.
Key metrics and highlights
- Target job concurrency: Setting a lower target_job_concurrency for a reservation increases the minimum number of slots allocated per query, which can result in faster or more consistent performance, particularly for complex queries. Changes to concurrency are only supported at the reservation level; a hedged sketch of the DDL follows this list.
- Specs: Within each project, up to 1,000 interactive queries and 20,000 batch queries can be queued at once. Batch queries use the same resources as interactive queries.
- Timeouts: Users can now configure a timeout value for each query/job queue. If a query can’t start executing within the specified time, BigQuery will attempt to cancel the query/job instead of queuing it for an extended amount of time. The default timeout value is 6 hours for interactive queries and 24 hours for batch queries, and can be set at the organization or project level.
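For illustration, here is a minimal sketch of how an administrator might set a concurrency target on a reservation with DDL. The project and reservation names are hypothetical; verify the exact option name against the query queues documentation before relying on it.

```sql
-- Run in the reservation's administration project (names are hypothetical).
ALTER RESERVATION `my-admin-project.region-us.prod-reservation`
SET OPTIONS (
  target_job_concurrency = 10  -- set back to 0 to restore dynamic concurrency
);
```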
For more information, read the query queues documentation.
Read More for the details.
Cloud Storage provides simple, scalable, secure, and cost-effective object storage for customers to store, access, and manage their data. This year at Google Cloud Next ‘23, we presented a number of new Cloud Storage capabilities designed to help you serve your storage needs for mission-critical applications of all kinds, including AI/ML and data analytics workloads. Here’s a concise recap of all the new features and capabilities that we announced.
Our customers increasingly rely on object storage when running their data-intensive workloads (AI/ML, batch analytics, streaming analytics, and high performance computing). To help you with this critical and fast-growing area, we announced multiple capabilities in the areas of programmability, performance, and manageability.
Programmability – APIs for easy integration into your business
- Cloud Storage Fuse (generally available): Mount and access a Cloud Storage bucket as a local file system, optimized for AI/ML and GKE workloads (learn more); a minimal pod-spec sketch follows this list
- Pub/Sub to Cloud Storage subscriptions (GA): Simplify streaming data ingestion to your object storage buckets with just a few clicks (learn more)
- Transfer for HDFS (preview in Q4 ‘23): Use Storage Transfer Service to easily transfer petabytes of Hadoop/Spark workloads to Google Cloud
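As a hedged sketch of the Cloud Storage Fuse GKE integration mentioned above, a pod might consume a bucket through the Cloud Storage FUSE CSI driver roughly as follows. The pod, service account, and bucket names are placeholders, and the cluster is assumed to have the CSI driver and Workload Identity enabled.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcs-fuse-example                 # placeholder
  annotations:
    gke-gcsfuse/volumes: "true"          # opts the pod into the FUSE sidecar
spec:
  serviceAccountName: gcs-access-ksa     # placeholder KSA bound via Workload Identity
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: training-data
      mountPath: /data                   # bucket objects appear as local files here
  volumes:
  - name: training-data
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeAttributes:
        bucketName: my-training-bucket   # placeholder bucket
```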
Performance – Optimal price/performance for your data-intensive workloads
- Cloud Storage client library transfer manager (preview): Improved read/write performance in client libraries by parallelizing uploads and downloads (learn more)
- Anywhere Cache (preview in Q4 ‘23): Elastically scalable zonal SSD read cache to minimize your egress bandwidth costs at predictable low latencies
- gRPC API (preview in Q4 ‘23): New Cloud Storage API option that provides more efficient routing for analytics workloads, reducing overall execution time
- Improved Hadoop connector (GA in Q4 ‘23): Improved write performance for Hadoop/Spark workloads on Cloud Storage via parallelization and disk buffering
Manageability – End-to-end data lifecycle tailored for ease of use and security
- Event-driven transfer (GA): Listen to event notifications to trigger Storage Transfer Service (learn more)
- Managed folders (preview in Q4 ‘23): Prefix/folder-level fine-grained access control to enable simplicity and security for data-intensive workloads
No matter what workload you leverage on object storage, areas such as security, governance, availability, scalability, and management are important. Here are the capabilities we announced for mission-critical workloads of all kinds.
Intelligent storage – Easily understand and manage your data at scale
- Storage Insights datasets (preview in Q4 ‘23): Visualize and analyze your storage usage and trends within BigQuery. This supplements currently available Storage Insights inventory reports (learn more)
- Autoclass for existing buckets (preview in Q4 ‘23): Automatically transition objects to different storage classes based on last access time, without needing to create a new bucket
Enterprise Security and Governance: Safeguard assets while meeting security, governance, regulation, and compliance requirements
- Custom audit information (GA): Additional headers to give user, application, and job context to audit logs (learn more)
- Object retention lock (preview): WORM-compliant immutable object storage by setting retain-until policies on individual objects
- Assured Workloads (GA): Integrated and packaged security product controls to help meet compliance requirements in the United States public sector, with added support for the ITAR and IL5 compliance regimes. Includes controls for data residency, data access, and data sovereignty for data at rest, in use, and in transit. (learn more)
- Custom organization policy (preview in Q4 ‘23): Custom guardrails authored and enforced by customers on GCP resources (buckets) to meet their organization’s governance requirements
Resilience and Scalability: Enable geo-redundancy while maintaining strict data residency controls, disaster recovery, and real-time monitoring
- Soft delete (preview in Q4 ‘23): Quickly recover your data from accidental deletion during a specified retention period
- Replication monitoring (preview in Q4 ‘23): Near real-time replication status is now provided for default and turbo replication in dual-region and multi-region buckets
- Additional dual-regions (GA): Flexible and performant geo-redundancy with new support for Canada and Australia regions (learn more)
To learn more about how to leverage Cloud Storage in your application, try out one of the new Google Cloud Jump Start Solution tutorials. For a list of all of Google Cloud Next ‘23 announcements, check out the Next ‘23 Wrap-Up.
Read More for the details.
Security Command Center Premium, Google Cloud’s built-in security and risk management solution, provides out-of-the-box security controls for cloud posture management and threat detection. As our customers build more complex environments with different risk profiles, cloud security teams may need to monitor for specific conditions and threats not covered by Security Command Center’s default findings and detections.
To help tailor detection and monitoring capabilities, Security Command Center now allows organizations to design their own customized security controls and threat detectors for their Google Cloud environment. For example, a security operator may need to detect if a key used to access a service account has not been rotated in the past 30 days, violating the organization’s security practices. Data security managers can detect if a Cloud SQL database has been provisioned without enabling a backup, which could be required to recover from a possible ransomware attack.
New Custom Modules extend Security Command Center’s out-of-the-box posture management by allowing security managers to scan resources and policies using custom logic to identify vulnerabilities, misconfigurations, and compliance violations. The definition of a module determines the resources that will be scanned and the information to be returned.
To help security teams respond to an issue, a custom module can include the detection’s severity, details explaining what was discovered, instructions on how to fix it, and other customer-specific information needed for further security analysis.
Defining a custom module is straightforward. Go to Security Health Analytics and select Create Module. Then give your custom detector a name and choose the type of asset you want to monitor from the list of supported resources.
Modules are defined using YAML and Common Expression Language (CEL) expressions. When using the Google Cloud console to create custom modules, most of the coding is generated automatically. This lessens the need for specialized skills.
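As a hedged sketch, a custom module for the Cloud SQL backup example above might look roughly like the following. The field names mirror the Security Health Analytics custom-module schema at a high level, but treat the exact shape and the CEL expression as assumptions to validate against the documentation.

```yaml
# Illustrative custom module: flag Cloud SQL instances without automated backups.
severity: MEDIUM
description: "Cloud SQL instance is running without automated backups enabled"
recommendation: "Enable automated backups in the instance settings"
resource_selector:
  resource_types:
  - sqladmin.googleapis.com/Instance
predicate:
  # CEL over the scanned resource; the field path follows the SQL Admin API shape.
  expression: "!resource.settings.backupConfiguration.enabled"
```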
Once the module is tested to confirm that it works as expected, it will start scanning your Google Cloud environment and produce findings. Just like Security Command Center’s built-in posture management capabilities, custom modules operate in near real-time.
Security Command Center provides detection for common cloud threats, such as data exfiltration, anomalous IAM activity, brute force attacks, malicious script execution, and more. Customers can now add customized threat detections using their own detection parameters, remediation guidance, and severity designations.
For example, if a security team learns of a new command and control domain in use by an adversary, they can now add that domain to the list of domains monitored by Security Command Center. For organizations who subscribe to lists or participate in information sharing, indicators can be uploaded in bulk, along with annotations and details.
To make implementation simple, Security Command Center also includes JSON templates that can be modified with organization-specific parameters. All custom detections run in real-time mode, alongside our built-in detectors for unified security operations.
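For illustration, a domain-watchlist detector might be configured with JSON along these lines. The exact template shape depends on the module type, and every value here is hypothetical, so start from the templates shipped with Security Command Center rather than this sketch.

```json
{
  "metadata": {
    "module_display_name": "Watchlisted C2 domains",
    "severity": "HIGH",
    "description": "Connection observed to a domain from our threat intel feed",
    "recommendation": "Isolate the workload and review recent activity"
  },
  "domains": [
    "bad-domain.example.com",
    "c2.example.net"
  ]
}
```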
Security Command Center can help Google Cloud customers tailor their cloud posture security controls and cloud threat detection to meet specific requirements. To get started today, go to Security Command Center in the Google Cloud console.
Read More for the details.
Customers using Kubernetes at scale need consistent guardrails for how resources are used across their environments to improve security, resource management, and flexibility. Customers have told us that they need an easy way to apply and view those policy guardrails, so we launched the Policy Controller dashboard and added support for all GKE environments.
We received further feedback from Security Administrators that policy and compliance violation reports for GKE should be available alongside security insights from across their Google Cloud estate. To address this, we are excited to announce a fully managed integration to surface Policy Controller (CIS Kubernetes Benchmark v1.5.1 and PCI-DSS v3.2.1) violations in Security Command Center (SCC).
SCC is our built-in security and risk management solution for Google Cloud. It helps discover misconfigurations, vulnerabilities, and compliance errors that can leave cloud assets exposed to attack.
Policy Controller can help you audit or enforce fully programmable policies for your GKE cluster resources that act as “guardrails,” and prevent changes from violating security, operational, or compliance controls. Policy Controller can help accelerate your application modernization efforts by helping developers release code quickly and safely. Here are some examples of policies that you can audit or enforce with Policy Controller:
- All container images must be from approved repositories
- All pods must have resource limits
- Resources running on my fleet of clusters should be CIS-K8s benchmark-compliant
- Resources running on my fleet of clusters should be NIST-800 framework-compliant
- Resources running on my fleet of clusters should be PCI-DSS benchmark-compliant
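As one concrete illustration, the first guardrail above (approved image repositories) maps to the K8sAllowedRepos constraint template from Policy Controller’s constraint template library. The constraint name and repository path below are placeholders:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: require-approved-repos           # placeholder name
spec:
  enforcementAction: deny                # use "dryrun" to audit without blocking
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    repos:
    - "us-docker.pkg.dev/my-project/approved/"   # placeholder repository prefix
```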
Policy Controller violations are available in SCC for all Policy Controller users. Benefits of this integration include:
- Increased visibility and transparency: With SCC integration, you can get organization-wide visibility into your platform and workload violations from a single dashboard. This can lead to improved security and compliance posture and reduced risk for your organization.
- Ease of use: Fully managed integration means no additional build or operational overhead. It is available out-of-the-box.
- On-by-default: The integration will be on-by-default for all Policy Controller and Security Command Center users.
- Improved efficiency and decision-making: Integrated violations and compliance reporting provides data to inform decisions about the right steps to meet desired security, governance, and compliance standards.
Existing Policy Controller and Security Command Center users do not need to do anything: Policy Controller violations will automatically show up in the SCC Findings tab.
View Policy Controller violations from SCC Findings tab
View Policy Controller findings from SCC Vulnerabilities tab
Each Policy Controller assessment is visible alongside the other assessments SCC offers and mapped to the relevant compliance control on the SCC vulnerabilities page.
We continue to invest in building out fully managed Policy features for GKE and GKE-Enterprise, focusing on ease-of-use, out-of-the-box content, and a more integrated Google Cloud experience. To get started with Policy Controller, simply install Policy Controller and try applying a policy bundle to audit your fleet of clusters against a standard such as the CIS Kubernetes benchmark. To get started with SCC today, visit the Google Cloud console and our quickstart guide.
Read More for the details.
Today, AWS Lake Formation announces the general availability of Hybrid Access Mode for AWS Glue Data Catalog. This feature provides you the flexibility to selectively enable Lake Formation for databases and tables in your AWS Glue Data Catalog. Before this launch, you had to move all existing users of a table into Lake Formation in a single step, which required a certain amount of coordination among data owners and data consumers. With the Hybrid Access Mode, you now have an incremental path wherein you can enable Lake Formation for a specific set of users without interrupting other existing users or workloads.
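As a hedged sketch, enabling hybrid access mode when registering an Amazon S3 location might look like the following with boto3. The HybridAccessEnabled flag reflects this launch, but treat the exact parameter shape as an assumption to verify against the current Lake Formation API reference; all names are hypothetical.

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Register a data location in hybrid access mode so Lake Formation permissions
# apply only to opted-in principals while existing IAM-based access keeps working.
lakeformation.register_resource(
    ResourceArn="arn:aws:s3:::my-data-lake/sales/",  # hypothetical S3 location
    UseServiceLinkedRole=True,
    HybridAccessEnabled=True,
)
```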
Read More for the details.
AWS HealthOmics is now available in the Israel (Tel Aviv) Region. AWS HealthOmics is a fully managed service that helps healthcare and life science organizations store, query, and analyze genomic, transcriptomic, and other omics data at scale. By removing the undifferentiated heavy lifting, customers can generate deeper insights from omics data to improve health and advance scientific discoveries.
Read More for the details.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) Hpc7g instances are available in Asia Pacific (Tokyo), Europe (Ireland), and the AWS GovCloud (US-West) Regions. Amazon EC2 Hpc7g instances are powered by AWS Graviton processors, which are custom Arm-based processors designed by AWS.
Read More for the details.
Today, Amazon DynamoDB announces the general availability of incremental export to S3, which allows you to export only the data that has changed within a specified time interval. With incremental exports, you can now export data that was inserted, updated, or deleted in small increments. You can export changed data ranging from a few megabytes to terabytes with a few clicks in the AWS Management Console, an API call, or the AWS Command Line Interface. Choose a DynamoDB table that has point-in-time recovery enabled, specify an export time period for which you want incremental data, choose your target Amazon S3 bucket, and export.
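For illustration, an incremental export request with boto3 might look like the following; the table ARN, bucket, and time window are hypothetical.

```python
import boto3
from datetime import datetime, timezone

dynamodb = boto3.client("dynamodb")

# Export only the changes made during a one-day window to S3.
response = dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders",  # hypothetical
    S3Bucket="my-export-bucket",                                      # hypothetical
    ExportFormat="DYNAMODB_JSON",
    ExportType="INCREMENTAL_EXPORT",
    IncrementalExportSpecification={
        "ExportFromTime": datetime(2023, 9, 25, tzinfo=timezone.utc),
        "ExportToTime": datetime(2023, 9, 26, tzinfo=timezone.utc),
        "ExportViewType": "NEW_AND_OLD_IMAGES",  # or "NEW_IMAGE"
    },
)
print(response["ExportDescription"]["ExportArn"])
```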
Read More for the details.
Amazon Connect Contact Lens now supports a new permission to provide agents with access to only the contacts that they handled, within the contact search page in the Amazon Connect UI. Today, Amazon Connect has permissions that enable contact center managers to access contacts handled by agents in their teams and evaluate agent performance. With this launch, agents can securely search for their own contacts and review their recordings and transcripts alongside performance evaluations submitted by managers.
Read More for the details.
Amazon EMR Serverless is a serverless option that helps data analysts and engineers to run open-source big data analytics frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters or servers. We are happy to announce that starting today, you can set default configurations at the application level, allowing you to maintain consistent settings for all Spark and Hive jobs submitted under the same application.
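As a rough sketch with boto3, application-level defaults might be set at creation time along these lines. The runtimeConfiguration shape shown here is an assumption to verify against the EMR Serverless API reference, and the names and values are hypothetical.

```python
import boto3

emr_serverless = boto3.client("emr-serverless")

# Create an application whose default Spark properties apply to every job
# submitted to it, unless a job overrides them.
response = emr_serverless.create_application(
    name="nightly-etl",           # hypothetical
    releaseLabel="emr-6.12.0",
    type="SPARK",
    runtimeConfiguration=[
        {
            "classification": "spark-defaults",
            "properties": {
                "spark.executor.cores": "2",
                "spark.executor.memory": "4g",
            },
        }
    ],
)
print(response["applicationId"])
```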
Read More for the details.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 3.5.1 for new and existing clusters. Apache Kafka 3.5.1 includes several bug fixes and new features that improve performance. Key features include the introduction of new rack-aware partition assignment for consumers. Amazon MSK will continue to use and manage Zookeeper for quorum management in this release. For a complete list of improvements and bug fixes, see the Apache Kafka release notes for 3.5.1.
Read More for the details.
Starting today, AWS Global Accelerator supports application endpoints in four additional AWS Regions – Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich) and Israel (Tel Aviv), expanding the number of supported AWS Regions to twenty-eight.
Read More for the details.
We’re excited to share that Gartner has recognized Google as a Leader in the 2023 Gartner® Magic Quadrant™ for Container Management, in which Google Cloud was evaluated. Google was positioned highest in Ability to Execute of all vendors evaluated, and we believe this validates the success of our mission to be the best place for customers anywhere to run containerized workloads. Gartner predicts that “by 2027, more than 90% of global organizations will be running containerized applications in production, which is a significant increase from fewer than 40% in 2021.” From our perspective, containers power today’s most innovative apps and businesses, and will unlock even greater business transformation and innovation in the years ahead.
Our journey began in 2014 when we introduced Kubernetes and launched Google Kubernetes Engine (GKE), the first managed Kubernetes service in the world. GKE is the most scalable Kubernetes service available in the industry today. In 2019, we launched Cloud Run, the first platform to combine the benefits of containers and serverless; today, Cloud Run provides one of the leading developer experiences of any cloud provider. In 2019, we also extended GKE to hybrid and multi-cloud environments with Anthos, and introduced Autopilot mode in GKE in 2021. This year, we expanded the reach of our container management platform to Google Distributed Cloud.
Wherever our customers build and run containers, from Google Cloud to other clouds to the data center and the edge, we aim to deliver the most simple, comprehensive, secure, and reliable container platform for all workloads.
Today, we continue our mission to help our customers modernize and transform their businesses with containers. At our flagship Google Cloud Next ‘23 conference, we introduced powerful new enhancements to our container management products, what we call “the next evolution of container platforms.” Key innovations include:
- GKE Enterprise, a new premium edition of GKE. GKE Enterprise builds on Google Cloud’s leadership in containers and Kubernetes, bringing together the best of GKE and Anthos into an integrated and intuitive container platform, with a unified console experience. Plus, GKE Enterprise includes hybrid and multi-cloud support, so you can run container workloads anywhere.
- Cloud TPU v5e support in GKE, for organizations developing the next generation of AI applications. We also announced general availability of both the A3 VM with NVIDIA H100 GPU and Cloud Storage Fuse for GKE.
- Duet AI in GKE and Cloud Run, which leverages the power of generative AI to drive productivity and velocity for customer container platform teams. With Duet AI specifically trained on our container management product documentation, customers can cut down on the time it takes to run and optimize containerized applications.
Check out all the key container management innovations announced at Next ‘23 — from Kubernetes to AI — and how Google Cloud helps our customers improve efficiency, reliability, and security for their most important containerized workloads.
We’ve come a long way since 2014, and we are thrilled to be recognized as a Leader for Container Management by Gartner. We are also grateful to our customers around the world who’ve placed their trust in our container management products. Here’s a small sampling:
- The BBC uses Cloud Run to handle dramatic traffic spikes, scale from 150-200 container instances to over 1,000 in under a minute, and entertain over 498 million adults per week.
- Carrefour depends on both GKE and Cloud Run together to run new ecommerce apps.
- Equifax depends on GKE as the foundation of its global data fabric, analyzing 250 billion data points across 8 geographies and helping its customers around the world live their financial best.
- The 15 largest GKE customers are already using it to power their AI workloads. In fact, over the last year, the use of GPUs with GKE has doubled. Hear how Grammarly, Lightricks, and Character.AI work with GKE to build large AI models.
We are honored to be a Leader in the 2023 Gartner® Magic Quadrant™ for Container Management, and look forward to continuing to innovate and partner with customers on their digital transformation journeys.
Download the complimentary copy of the report: 2023 Gartner® Magic Quadrant™ for Container Management. Learn more about how organizations are transforming their businesses with Containers in Google Cloud.
Read More for the details.
We’re thrilled to share that Azure API Center is now open for everyone to try during our ungated public preview!
Read More for the details.