Azure – Retirement: Azure Database Migration Service (classic) – SQL Server scenarios deprecation
Azure Database Migration Service (classic) – SQL Server scenarios will not be supported after March 31, 2026.
Read More for the details.
Azure Storage PHP client libraries will be retired on 17 March 2024
Read More for the details.
Transition to Prometheus recommended alert rules (preview) before 14 March 2026.
Read More for the details.
Migrate to GitOps Extension for Flux v2 by 24 May 2025
Read More for the details.
Transition Batch pools to the Simplified compute node communication model
Read More for the details.
Effective 15 March 2026, the ability to add playbooks while creating or editing Microsoft Sentinel analytics rules will be deprecated.
Read More for the details.
Remove references in your autoscale formula for the service-defined variables
Read More for the details.
AWS Backup customers can now back up and restore their virtual machines running on VMware vSphere 8. Virtual machines compatible with ESX 3.x and later can use vSphere 8.0. Additionally, AWS Backup gateway now supports backups and restores of virtual machines configured with multiple vNICs (virtual network interface cards).
Read More for the details.
AWS Database Migration Service (AWS DMS) has expanded support of Amazon Simple Storage Service (Amazon S3) as a target by adding the ability to create an AWS Glue Data Catalog from the Amazon S3 data files generated by AWS DMS. With this integration, you no longer need to run a crawler or additional extract, transform, and load (ETL) jobs to create the catalog, and the Amazon S3 data is ready to be queried through other AWS services such as Amazon Athena.
Read More for the details.
AWS Transfer Family, a service that provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS), is now available in the AWS Middle East (UAE) Region.
Read More for the details.
AWS Migration Hub Strategy Recommendations now allows you to analyze application binaries, detect incompatibilities, and identify viable modernization pathways such as refactoring and replatforming without assessing your application source code. Strategy Recommendations can now help inspect web application binaries on Windows and Linux servers and generate an incompatibility report that you can use to implement viable modernization pathways.
Read More for the details.
AWS Database Migration Service (AWS DMS) now supports validation to ensure that data is migrated accurately to S3.
Read More for the details.
The promise of digital transformation is being challenged by the increasingly disruptive threat landscape. More sophisticated and capable adversaries have proliferated as nation-states pivot from cyber-espionage to compromise of private industry for financial gain. Their tactics shift and evolve rapidly as workloads and workforces become distributed and enterprises’ attack surface grows. And the security talent needed to help remains scarce and stubbornly grounded in ineffective, toil-based legacy approaches and tooling.
Join Mandiant and Google Cloud together for the first time at RSA Conference 2023. We’re excited to bring our joint capabilities, products, and expertise together, so you can defend your organization against today’s threats with:
Unique, up-to-date, and actionable threat intelligence that can only come from ongoing, frontline engagement with the world’s most sophisticated and dangerous adversaries.
Comprehensive visibility across your attack surface and infrastructure, delivered by a modern security operations platform that empowers you to rapidly detect, investigate, respond to, and remediate security incidents in your environment.
A secure-by-design, secure-by-default cloud platform to drive your organization’s digital transformation.
Proven expertise and assistance to extend your team with the help you need – before, during, and after security incidents.
Join us for insightful keynotes and sessions, new technology demos, and one-to-one conversations with Mandiant | Google Cloud experts at our booth N6058.
Join Sandra Joyce, VP, Mandiant Intelligence at Google Cloud, and her team for an exclusive, invitation-only threat intelligence briefing on Monday, April 24, from 4 p.m. until 5 p.m. to hear about the current threat landscape and engage with the analyst team that conducts the research on the frontlines.
Join keynote speaker Heather Adkins, VP of Security Engineering at Google Cloud, at the Elevate Keynote and Breakfast on Wednesday, April 26, from 8:30 a.m. until 10 a.m. to hear about her more than 16 years in security at Google and her insights from “Hacking Google,” on how she built resilience within herself and her team while engineering world-class security.
Visit booth N6058 for a personal demonstration of our security solutions or book one-on-one time with a Mandiant and Google Cloud cybersecurity expert.
Reserve your spot at the Mandiant and Google Cloud Happy Hours
Taco Tuesday – Tuesday, April 25 from 6 p.m. until 7:30 p.m.
Wine Down Wednesday – Wednesday, April 26, from 6 p.m. until 7:30 p.m.
Come join us at the Mandiant and Google Cloud booth N6058 to hear from a variety of experts and special guests on topics ranging from Autonomic Security Operations, to threat hunting with VirusTotal, to the latest Mandiant threat landscape trends, to how to protect your apps and APIs from fraud and abuse. You can also hear from our thought leaders during sessions including (all times Pacific Time):
The State of Cybersecurity – Year in Review
Speaker: Kevin Mandia, CEO of Mandiant, Google Cloud
Wednesday, Apr. 26, 2023 | 11:10 a.m.
The Megatrends driving cloud adoption – and improving security – for all
Speaker: Phil Venables, Chief Information Security Officer, Google Cloud
Wednesday, Apr. 26, 2023 | 8:30 a.m.
Intelligently managing the geopolitics and security interplay
Speaker: John Miller, Mandiant Senior Manager, Google Cloud & Shanyn Ronis, Mandiant Senior Manager, Google Cloud
Wednesday, Apr. 26, 2023 | 9:40 a.m.
The World in Crisis: Prepare for Extreme Events via Supply Chain Resilience
Panelists: Erin Joe, Director, Mandiant Strategy and Alliances, Office of the CISO, Google Cloud; Andrea Little Limbago, PhD, Senior Vice President, Research & Analysis, Interos; Edna Conway, VP, Chief Security & Risk Officer, Cloud Infrastructure, Microsoft
Monday, Apr. 24, 2023 | 2:20 p.m.
How Do You Trust Open Source Software?
Speakers: Brian Russell, Product Manager, Google & Naveen Srinivasan, OpenSSF Scorecard Maintainer
Tuesday, Apr. 25, 2023 | 2:25 p.m.
Passkeys: The Good, the Bad and the Ugly
Speaker: Christiaan Brand, Product Manager, Security & Identity, Google
Tuesday, Apr. 25, 2023 | 9:40 a.m.
Ransomware 101: Get Smart Understanding Real Attacks
Speaker: Jibran Ilyas, Consulting Leader, Mandiant | Google Cloud
Tuesday, Apr. 25, 2023 | 2:25 p.m.
Defenders, Unite! The JCDC and a Whole of Nation Approach to Cybersecurity
Panelists: Maria Probst, Cyber Operational Planner, CISA/JCDC; Jason Barrett, Deputy Director, Cyber Threat Intelligence Integration Center (CTIIC), ODNI; Stephanie Kiel, Senior Policy Manager, Cloud Security, Government Affairs and Public Policy, Google; Robert Sheldon, Director of Public Policy & Strategy, CrowdStrike
As your security transformation partner, Mandiant | Google Cloud can help you:
Understand threat actors and their potential attack vectors
Detect, investigate and respond to threats faster
Build on a secure-by-design, secure-by-default cloud platform
Extend your team with the expertise you need – before, during, and after a security incident
And so much more…
Come experience Mandiant and Google Cloud frontline intelligence and cloud innovation at RSAC booth N6058, and sign up for one-to-one conversations with Mandiant and Google Cloud experts.
We look forward to seeing you at the RSA Conference and helping you defend your most critical data, applications, and communications.
Read More for the details.
You can now manage the TLS certificates associated with Listeners through the Azure portal.
Read More for the details.
Amazon CloudWatch Logs now supports ingesting the enriched metadata introduced in Amazon Virtual Private Cloud (Amazon VPC) flow logs versions 3 through 5, in addition to the default fields. This launch includes metadata fields that provide more insight into the network interface, traffic type, and the path of egress traffic to the destination.
Read More for the details.
Customers love the way BigQuery makes it easy for them to do hard things — from BigQuery Machine Learning (BQML) SQL turning data analysts into data scientists, to rich text analytics using the SEARCH function that unlocks ad-hoc text searches on unstructured data. A key reason for BigQuery’s ease of use is its underlying serverless architecture, which supercharges your analytical queries while making them run faster over time, all without changing a single line of SQL.
In this blog, we lift the curtain and share the magic behind BigQuery’s serverless architecture, such as storage and query optimizations as well as ecosystem improvements, and how they enable customers to work without limits in BigQuery to run their data analytics, data engineering and data science workloads.
BigQuery stores table data in a columnar file store called Capacitor. These Capacitor files initially had a fixed file size, on the order of hundreds of megabytes, to support BigQuery customers’ large data sets. The larger file sizes enabled fast and efficient querying of petabyte-scale data by reducing the number of files a query had to scan. But as customers moving from traditional data warehouses started bringing in smaller data sets — on the order of gigabytes and terabytes — the default “big” file sizes were no longer the optimal form factor for these smaller tables. Recognizing that the solution would need to scale for users with big and smaller query workloads, the BigQuery team came up with the concept of adaptive file sizing for Capacitor files to improve small query performance.
The BigQuery team developed an adaptive algorithm to dynamically assign the appropriate file size, ranging from tens to hundreds of megabytes, to new tables being created in BigQuery storage. For existing tables, the BigQuery team added a background process to gradually migrate existing “fixed” file size tables into adaptive tables, to migrate customers’ existing tables to the performance-efficient adaptive tables. Today, the background Capacitor process continues to scan the growth of all tables and dynamically resizes them to ensure optimal performance.
“We have seen a greater than 90% reduction in the number of analytic queries in production that take more than one minute to run.” – Emily Pearson, Associate Director, Data Access and Visualization Platforms, Wayfair
Reading from and writing to BigQuery tables maintained in storage files would quickly become inefficient if workloads had to scan all the files for every table. BigQuery, like most large data processing systems, has developed a rich store of information on the file contents, which is stored in the header of each Capacitor file. This information about data, called metadata, allows query planning, streaming and batch ingest, transaction processing, and other read-write processes in BigQuery to quickly identify the relevant files within storage on which to perform the necessary operations, without wasting time reading non-relevant data files.
But while reading metadata for small tables is relatively simple and fast, large (petabyte-scale) fact tables can generate millions of metadata entries. For these queries to generate results quickly, the query optimizer needs a highly performant metadata storage system.
Based on the concepts proposed in their 2021 VLDB paper, “Big Metadata: When Metadata is Big Data,” the BigQuery team developed a distributed metadata system called CMETA that features fine-grained column- and block-level metadata, scales to very large tables, and is organized and accessible as a system table. When the query optimizer receives a query, it rewrites the query to apply a semi-join (WHERE EXISTS or WHERE IN) with the CMETA system tables. By adding the metadata lookup to the query predicate, the query optimizer dramatically increases the efficiency of the query.
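To make the rewrite concrete, here is a purely illustrative sketch. CMETA itself is internal to BigQuery and not user-queryable, so the metadata table and column names below are hypothetical; the snippet only shows the general shape of a semi-join that prunes blocks by comparing the query predicate against per-block min/max metadata.

```python
# Purely illustrative: CMETA is internal and not user-queryable.
# `cmeta_blocks`, `block_id`, and the min/max columns are hypothetical names;
# the second query only sketches the shape of the optimizer's semi-join rewrite.
user_query = """
    SELECT order_id, total
    FROM `shop.orders`
    WHERE ship_country = 'NZ'
"""

rewritten_query = """
    SELECT order_id, total
    FROM `shop.orders` t
    WHERE t.ship_country = 'NZ'
      AND t.block_id IN (              -- semi-join against block-level metadata
          SELECT m.block_id
          FROM cmeta_blocks m
          WHERE m.min_ship_country <= 'NZ'
            AND m.max_ship_country >= 'NZ')
"""
```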
In addition to managing metadata for BigQuery’s Capacitor-based storage, CMETA also extends to external tables through BigLake, improving the performance of lookups of large numbers of Hive partitioned tables.
The results shared in the VLDB paper demonstrate that query runtimes are accelerated by 5× to 10× for queries on tables ranging from 100GB to 10TB using the CMETA metadata system.
BigQuery has a built-in storage optimizer that continuously analyzes and optimizes data stored in storage files within Capacitor using various techniques:
Compact and Coalesce: BigQuery supports fast INSERTs using SQL or API interfaces. When data is initially inserted into tables, depending on the size of the inserts, there may be too many small files created. The Storage Optimizer merges many of these individual files into one, allowing efficient reading of table data without increasing the metadata overhead.
The files used to store table data may not remain optimally sized over time. The storage optimizer analyzes this data and rewrites the files into right-sized files so that queries scan the appropriate number of files and retrieve data most efficiently. Why does the right size matter? If the files are too big, there is overhead in eliminating unwanted rows from the larger files. If the files are too small, there is overhead in reading and managing the metadata for the large number of small files.
Cluster: Tables with a user-defined column sort order are called clustered tables; when you cluster a table using multiple columns, the column order determines which columns take precedence when BigQuery sorts and groups the data into storage blocks. BigQuery clustering accelerates queries that filter or aggregate by the clustered columns by scanning only the relevant files and blocks based on the clustered columns, rather than the entire table or table partition. As data changes within a clustered table, the BigQuery storage optimizer automatically performs reclustering so that the cluster definition stays up to date, ensuring consistent query performance.
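As a concrete illustration of clustering from the user's side, here's a minimal sketch using the google-cloud-bigquery Python client; the dataset, table, and column names are hypothetical.

```python
# A minimal sketch (dataset/table/column names are hypothetical) of declaring
# clustering columns and filtering on them so BigQuery can prune storage blocks.
from google.cloud import bigquery

client = bigquery.Client()

# Column order in CLUSTER BY sets the precedence used to sort and group data into blocks.
client.query("""
    CREATE TABLE IF NOT EXISTS `my_dataset.orders`
    (customer_id STRING, order_date DATE, total NUMERIC)
    CLUSTER BY customer_id, order_date
""").result()

# A filter on the clustering columns lets BigQuery scan only the relevant blocks.
rows = client.query("""
    SELECT SUM(total) AS revenue
    FROM `my_dataset.orders`
    WHERE customer_id = 'C123' AND order_date >= '2023-01-01'
""").result()
print([r.revenue for r in rows])
```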
When a query begins execution in BigQuery, the query optimizer converts the query into an execution graph, broken down into stages, each of which has steps. BigQuery uses dynamic query execution, which means the execution plan can evolve to adapt to different data sizes and key distributions, ensuring fast query response times and efficient resource allocation. When querying large fact tables, there is a strong likelihood that data may be skewed, meaning it is distributed asymmetrically over certain key values. A query over a skewed fact table therefore processes many more records for the skewed key values than for others. When the query engine distributes work to workers querying skewed tables, certain workers can take much longer to complete their tasks because of the excess rows for those key values, creating uneven wait times across the workers.
Let’s consider data that can show skew in its distribution. Cricket is an international team sport. However, it is only popular in certain countries around the world. If we were to maintain a list of cricket fans by country, the data will show that it is skewed to fans from full Member countries of the International Cricket Council and is not equally distributed across all countries.
Traditional databases have tried to handle this by maintaining data distribution statistics. However, in modern data warehouses, data distribution can change rapidly and data analysts run increasingly complex queries, rendering these statistics obsolete and thus less useful. Depending on the tables and join columns being queried, the skew may be on the column referenced on the left side of the join or on the right side.
The BigQuery team addressed data skew by developing join skew processing: detecting skew and allocating work proportionally, so that more workers are assigned to process the join over the skewed data. While processing joins, the query engine continuously monitors join inputs for skewed data. When skew is detected, the query engine adapts the plan and further splits the skewed data, distributing processing evenly across skewed and non-skewed data. At execution time, the workers processing the skewed table are therefore allocated in proportion to the detected skew, allowing all workers to finish their tasks at roughly the same time and accelerating query runtime by eliminating delays caused by skewed data.
“The ease to adopt BigQuery in the automation of data processing was an eye-opener. We don’t have to optimize queries ourselves. Instead, we can write programs that generate the queries, load them into BigQuery, and seconds later get the result.” – Peter De Jaeger, Chief Information Officer, AZ Delta
BigQuery’s documentation on Quotas and limits for Query jobs states “Your project can run up to 100 concurrent interactive queries.” BigQuery used the default setting of 100 for concurrency because it met the requirements of 99.8% of customer workloads. Since this was a soft limit, administrators could always raise the maximum concurrency through a request process. To support the ever-expanding range of workloads, such as data engineering, complex analysis, Spark, and AI/ML processing, the BigQuery team developed dynamic concurrency with query queues to remove all practical limits on concurrency and eliminate the administrative burden. Dynamic concurrency with query queues is achieved with the following features:
Dynamic maximum concurrency setting: Customers receive the benefits of dynamic concurrency by default when they set the target concurrency to zero; BigQuery then automatically sets and manages concurrency based on reservation size and usage patterns. Experienced administrators who need a manual override can specify the target concurrency limit (see the sketch after this list), which replaces the dynamic concurrency setting. Note that the target concurrency limit is a function of available slots in the reservation, and the admin-specified limit can’t exceed that. For on-demand workloads, this limit is computed dynamically and is not configurable by administrators.
Queuing for queries over concurrency limits: BigQuery now supports Query Queues to handle overflow scenarios when peak workloads generate a burst of queries that exceed the maximum concurrency limit. With Query Queues enabled, BigQuery can queue up to 1000 interactive queries so that they get scheduled for execution rather than being terminated due to concurrency limits, as they were previously. Now, users no longer have to scan for idle time periods or periods of low usage to optimize when to submit their workload requests. BigQuery automatically runs their requests or schedules them on a queue to run as soon as current running workloads have completed. You can learn about Query Queues here.
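For reservation administrators, the manual override mentioned above is applied at the reservation level. The sketch below assumes the Reservation DDL option is named target_job_concurrency (verify the option name against the current BigQuery DDL reference); the admin project and reservation name are hypothetical, and a value of 0 leaves concurrency fully dynamic.

```python
# A rough sketch, assuming the Reservation DDL option is `target_job_concurrency`
# (check the BigQuery DDL reference); the admin project and reservation name are
# hypothetical. Setting the value to 0 keeps concurrency fully dynamic.
from google.cloud import bigquery

client = bigquery.Client(project="my-admin-project")

client.query("""
    ALTER RESERVATION `my-admin-project.region-us.prod-reservation`
    SET OPTIONS (target_job_concurrency = 0)
""").result()
```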
“BigQuery outperforms particularly strongly in very short and very complex queries. Half (47%) of the queries tested in BigQuery finished in less than 10 sec compared to only 20% on alternative solutions. Even more starkly, only 5% of the thousands of queries tested took more than 2 minutes to run on BigQuery whereas almost half (43%) of the queries tested on alternative solutions took 2 minutes or more to complete.” – Nikhil Mishra, Sr. Director of Engineering, Yahoo!
Most distributed processing systems make a tradeoff between cost (querying data on hard disk) and performance (querying data in memory). The BigQuery team believes that this trade-off is a fallacy and that users can have both low cost and high performance, without having to choose between them. To achieve this, the team developed a disaggregated intermediate cache layer called Colossus Flash Cache which maintains a cache in flash storage for actively queried data. Based on access patterns, the underlying storage infrastructure caches data in Colossus Flash Cache. This way, queries rarely need to go to disk to retrieve data; the data is served up quickly and efficiently from Colossus Flash Cache.
BigQuery achieves its highly scalable data processing capabilities through in-memory execution of queries. These in-memory operations bring data from disk and store intermediate results of the various stages of query processing in another in-memory distributed component called Shuffle. Analytical queries containing WITH clauses with common table expressions (CTEs) often reference the same table through multiple subqueries, repeating the same work. To solve this, the BigQuery team built a duplicate CTE detection mechanism into the query optimizer. This algorithm substantially reduces resource usage, allowing more shuffle capacity to be shared across queries.
To further help customers understand their shuffle usage, the team also added PERIOD_SHUFFLE_RAM_USAGE_RATIO metrics to the JOBS INFORMATION_SCHEMA view and to Admin Resource Charts. You should see fewer Resource Exceeded errors as a result of these improvements, and you now have a metric you can track to take preemptive action against excess shuffle resource usage.
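As a sketch of how you might watch this metric, the query below assumes the field surfaces (lower-cased) as period_shuffle_ram_usage_ratio in the per-period JOBS_TIMELINE flavor of the INFORMATION_SCHEMA jobs views; check the INFORMATION_SCHEMA reference for the exact view and column names in your region.

```python
# A rough sketch; the view and metric column are assumptions based on the metric
# name above - verify them in the INFORMATION_SCHEMA reference for your region.
from google.cloud import bigquery

client = bigquery.Client()
rows = client.query("""
    SELECT job_id, MAX(period_shuffle_ram_usage_ratio) AS peak_shuffle_ratio
    FROM `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
    WHERE period_start > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
    GROUP BY job_id
    ORDER BY peak_shuffle_ratio DESC
    LIMIT 10
""").result()
for r in rows:
    print(r.job_id, r.peak_shuffle_ratio)
```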
“Our teams wanted to do more with data to create better products and services, but the technology tools we had weren’t letting us grow and explore. And that data was growing continually. Just one of our data warehouses had grown 300% from 2014 to 2018. Cloud migration choices usually involve either re-engineering or lift-and-shift, but we decided on a different strategy for ours: move and improve. This allowed us to take full advantage of BigQuery’s capabilities, including its capacity and elasticity, to help solve our essential problem of capacity constraints.” – Srinivas Vaddadi, Delivery Head, Data Services Engineering, HSBC
The performance improvements BigQuery users experience are not limited to BigQuery’s query engine. We know that customers use BigQuery with other cloud services to allow data analysts to ingest from or query other data sources with their BigQuery data. To enable better interoperability, the BigQuery team works closely with other cloud services teams on a variety of integrations:
BigQuery JDBC/ODBC drivers: The new versions of the ODBC / JDBC drivers support faster user account authentication using OAuth 2.0 (OAuthType=1) by processing authentication token refreshes in the background.
BigQuery with Bigtable: The GA release of Cloud Bigtable to BigQuery federation supports pushdown of queries for specific row keys to avoid full table scans.
BigQuery with Spanner: Federated queries against Spanner in BigQuery now allow users to specify the execution priority, thereby giving them control over whether federated queries should compete with transaction traffic if executed with high priority or if they can complete at lower-priority settings.
BigQuery with Pub/Sub: BigQuery now supports direct ingest of Pub/Sub events through a purpose-built “BigQuery subscription” that allows events to be directly written to BigQuery tables.
BigQuery with Dataproc: The Spark connector for BigQuery supports the DIRECT write method, using the BigQuery Storage Write API, avoiding the need to write the data to Cloud Storage.
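For the Dataproc item above, here's a minimal PySpark sketch of the DIRECT write path; the dataset and table names are hypothetical, and the spark-bigquery-connector is assumed to be available on the cluster.

```python
# A minimal sketch (dataset/table names hypothetical); assumes the
# spark-bigquery-connector is on the classpath. "direct" selects the BigQuery
# Storage Write API path, so no temporary Cloud Storage staging bucket is needed.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-direct-write").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])

(df.write.format("bigquery")
   .option("writeMethod", "direct")   # DIRECT write via the Storage Write API
   .mode("append")
   .save("my_dataset.my_table"))
```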
Taken together, these improvements to BigQuery translate into tangible performance results and business gains for customers around the world. For example, Camanchaca achieved 6x faster data processing, Telus saw 20x faster data processing and reduced costs by $5M, Vodafone saw a 70% reduction in data ops and engineering costs, and Crux achieved 10x faster load times.
“Being able to very quickly and efficiently load our data into BigQuery allows us to build more product offerings, makes us more efficient, and allows us to offer more value-added services. Having BigQuery as part of our toolkit enables us to think up more products that help solve our customers’ challenges.” – Ryan Haggerty, Head of Infrastructure and Operations, Crux
Want to hear more about how you can use BigQuery to drive similar results for your business? Join us at the Data Cloud and AI Summit ‘23 to learn what’s new in BigQuery and check out our roadmap of performance innovations using the power of serverless.
Read More for the details.
AWS CodeBuild now supports an additional machine type for GPU-based workloads, with 4 vCPUs and 1 GPU, suited for less resource-intensive workloads.
Read More for the details.
San Francisco, here we come. Starting today, you can register at the Early Bird rate of $899 USD* for Google Cloud Next ‘23, taking place in person August 29-31, 2023.
This year’s Next conference comes at an exciting time. The emergence of generative AI is a transformational opportunity that some say may be as meaningful as the cloud itself. Beyond generative AI, there are breakthroughs in cybersecurity, better and smarter ways to gather and gain insights from data, advances in application development, and so much more. It’s clear that there has never been a better time to work in the cloud industry. And there’s no better time to get together to learn from one another, while we explore and imagine what all of this innovation will bring.
Returning to a large, in-person, three-day event opens up so much opportunity for rich experiences like hands-on previews, exclusive training and on-site boot camps, and face-to-face engagement with each other and our partners across our open ecosystem.
Our teams are busy designing experiences for you focused on six key topics:
Data cloud
AI & ML
Open infrastructure
Cybersecurity
Collaboration
DEI
And of course, in addition to dedicated AI and ML sessions, we’ll weave AI, including generative AI, throughout the event sessions to reflect the role these technologies play in innovations across nearly everything in cloud.
No matter your role or subject matter expertise, Next has content curated especially for you, including tracks for:
Application developers
Architects and IT professionals
Data and database engineers
Data scientists and data analysts
DevOps, SREs, IT Ops, and platform engineers
IT managers and business leaders
Productivity and collaboration app makers
Security professionals
Register for Next ’23 before May 31 to take advantage of the $899 USD early bird price – that’s $700 USD off the full ticket price of $1,599.*
We can’t wait to come back together as a community to welcome you to Next ’23.
*The $899 USD early bird price is valid through 11:59 PM PT on Wednesday, May 31, or until it’s sold out.
Read More for the details.
Cloud Code is a set of plugins for popular IDEs that make it easier to create, deploy, and integrate applications with Google Cloud. Cloud Code provides an excellent extension mechanism through custom templates. In this post, I show you how to create and use your own custom templates to add features beyond those supported natively in Cloud Code, such as .NET functions, event triggered functions, and more.
As a recap, in my Introducing Cloud Functions support in Cloud Code post, I pointed out some limitations of the current Cloud Functions support in Cloud Code:
Only four languages (Node.js, Python, Go, and Java) are supported in Cloud Functions templates. I’ve especially missed .NET support.
Templates for Cloud Run and Cloud Functions are only for HTTP triggered services. No templates for event triggered services.
Testing only works against deployed HTTP triggered services. No testing support for locally running services or event triggered services.
Let’s see how we can add these features with custom templates!
In Cloud Code, when you create a new application with Cloud Code → New Application, it asks you to choose the type of application you want to create.
For Kubernetes, Cloud Run, and Cloud Functions applications, it uses the templates defined in the cloud-code-samples repo to give you starter projects in one of the supported languages for those application types.
It gets more interesting when you choose the Custom application option. There, you can point to a GitHub repository with your own templates and Cloud Code will use those templates as starter projects. This is how you can extend Cloud Code – pretty cool!
The Manage custom sample repositories in Cloud Code for VS Code page has a detailed description of how a custom templates repository should look. There’s also a cloud-code-custom-samples-example GitHub repo and a nice video explaining custom sample templates.
Basically, it boils down to creating a public GitHub repository with your samples and having a .cctemplate file to catalog each template. That’s it!
We initially wanted to add support for only HTTP triggered .NET Cloud Functions, as this is currently missing from Cloud Code. However, we enjoyed creating these templates so much that we ended up completing a longer wish list:
Added templates for HTTP triggered Cloud Functions and Cloud Run services in multiple languages (.NET, Java, Node.js, Python).
Added templates for CloudEvents triggered (Pub/Sub, Cloud Storage, AuditLogs) Cloud Functions and Cloud Run services in multiple languages (.NET, Java, Node.js, Python).
Added lightweight gcloud based scripts to do local testing, deployment and cloud testing for each template.
You can check out my cloud-code-custom-templates repository for the list of templates.
To use these templates as starter projects:
Click on Cloud Code in VS Code
Select New Application → Custom Application → Import Sample from Repo
Point to my cloud-code-custom-templates repository
Choose a template as a starter project and follow the README.md instructions of the template.
Let’s take a look at some of these templates in more detail.
As an example, there’s a .NET: Cloud Functions – hello-http template. It’s an HTTP triggered .NET 6 Cloud Functions template. When you first install the template, the sample code is installed and a README.md guides you through how to use the template.
The code itself is a simple HelloWorld app that responds to HTTP GET requests; the more interesting part is the scripts folder that comes with the template.
In that scripts folder, there’s a test_local.sh file to test the function running locally. This is possible because Cloud Functions code uses Functions Framework, which enables Cloud Functions to run locally. Testing that function is just a matter of sending an HTTP request with the right format. In this case, it’s simply an HTTP GET request but it gets more complicated with event triggered functions. More on that later.
There’s also setup.sh to enable the right APIs before deploying the function, deploy.sh to deploy the function, and test_cloud.sh to test the deployed function using gcloud. I had to add these scripts as there’s no support in Cloud Code right now to deploy and test a function for .NET. As you see, however, it’s very easy to do with scripts installed as part of the template.
As you might know, Cloud Functions also support various event triggered functions. These events are powered by Eventarc in Cloud Functions gen2. In Cloud Code, there are no templates right now to help with the code and setup of event triggered functions.
In Eventarc, events come directly from sources (e.g. Cloud Storage, Pub/Sub, etc.) or they come via AuditLogs. We have some templates in various languages to showcase different event sources such as:
.NET: Cloud Functions – hello-auditlog: an AuditLog triggered .NET Cloud Functions template
Node.js: Cloud Functions – hello-gcs: a Cloud Storage triggered Node.js Cloud Functions template
Java: Cloud Functions – hello-pubsub: a Pub/Sub triggered Java Cloud Functions template
The event envelope is in CloudEvents format and the payload (the data field) contains the actual event. In .NET, the templates are based on the templates from the Google.Cloud.Functions.Templates package (which you can install and use with the dotnet command line tool to generate Cloud Function samples) and they handle the parsing of CloudEvents envelopes and payloads into strong types using the Functions Framework for various languages.
As before, each template includes scripts to test locally, deploy to the cloud, and test in the cloud. As an example, test_local.sh for the Cloud Storage template creates and sends the right CloudEvent for a Cloud Storage event.
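The script itself is a small gcloud/curl shell helper; as a rough Python stand-in for what it does (the event attributes, bucket and object names, and the local port are illustrative), the sketch below builds a Cloud Storage-shaped CloudEvent and posts it to a function running locally on the Functions Framework.

```python
# Illustrative stand-in, not the template's actual test_local.sh. Sends a Cloud
# Storage-shaped CloudEvent to a locally running function; names, payload fields,
# and the port are assumptions.
import requests
from cloudevents.http import CloudEvent, to_binary

attributes = {
    "type": "google.cloud.storage.object.v1.finalized",
    "source": "//storage.googleapis.com/projects/_/buckets/my-bucket",
    "subject": "objects/my-file.txt",
}
data = {"bucket": "my-bucket", "name": "my-file.txt", "contentType": "text/plain"}

headers, body = to_binary(CloudEvent(attributes, data))
response = requests.post("http://localhost:8080/", headers=headers, data=body)
print(response.status_code)
```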
This is very useful for local testing.
We have similar templates for Cloud Run as well. Some examples are:
.NET: Cloud Run – hello-gcs – a Cloud Storage triggered .NET Cloud Run template
Python: Cloud Run – hello-pubsub – a Pub/Sub triggered Python Cloud Run template
Node.js: Cloud Run – hello-auditlog – an AuditLog triggered Node.js Cloud Run template
Since these are Cloud Run services, they can’t use the Functions Framework. That means it’s up to you to parse the CloudEvents format using the CloudEvents SDK and the payload (the actual event) using the Google CloudEvents library. The templates take care of all these details and include the right SDKs and libraries for you out of the box.
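To make that concrete, here's a minimal Flask sketch of a CloudEvents-triggered Cloud Run service that parses the envelope with the CloudEvents SDK; the handler logic is purely illustrative, and mapping the payload into strong types with the Google CloudEvents library is omitted for brevity.

```python
# A minimal sketch of a CloudEvents-triggered Cloud Run service (no Functions
# Framework). The handler body is illustrative; the templates additionally map
# the payload into strong types via the Google CloudEvents library.
from flask import Flask, request
from cloudevents.http import from_http

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_event():
    # Parse the CloudEvents envelope from the HTTP headers and body.
    event = from_http(request.headers, request.get_data())
    # event["type"] and event["source"] come from the envelope; event.data carries
    # the actual payload (for Cloud Storage events, the object metadata).
    print(f"Received {event['type']} from {event['source']}")
    return ("", 204)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```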
At this point, you might be wondering: this is all great, but if I don’t have .NET or Node.js installed locally, how do I try all these templates?
I was pleased to learn that Cloud Code is available in Cloud Shell Editor, so you can use Cloud Code and import these custom templates right from your browser.
Moreover, since Cloud Shell already comes with .NET and Node.js pre-installed, you can build, run, test, and deploy all the samples using the scripts in the templates, right in your browser. This is pretty neat!
I’m impressed at how easy it is to extend Cloud Code with custom templates. It’s also pretty cool that you can use Cloud Code and any custom templates you create right in your browser without having to install anything, thanks to Cloud Shell.
If you’re interested in helping out with templates for other languages (I’d love Go support!), feel free to reach out to me on Twitter @meteatamel or simply send a pull request to my cloud-code-custom-templates repo and I’ll be happy to collaborate. Thanks to Marc Cohen for contributing Python templates and the GitHub Actions that share scripts between templates.
To learn more about Cloud Functions support in Cloud Code, try our new Create and deploy a function with Cloud Code tutorial.
Read More for the details.
An unreliable app is no fun to use. Ensuring that users experience high levels of availability, consistency, and correctness can go a long way in establishing user trust and positive business outcomes. Over time, as new features are added to applications and modifications made to underlying web services, the ability to comprehensively monitor at the application level becomes increasingly critical.
Uptime checks in Google Cloud Monitoring are a lightweight observability feature that lets application owners easily monitor the performance of an application’s critical user journeys. An uptime check continuously performs validations against a resource to track availability, latency, and other key performance indicators. Uptime checks can be paired with alerts to track quality of service, detect product degradation, and proactively reduce negative impact on users.
HTTP POST is the standard way to create or update a REST resource. Some common examples of this operation include creating an account, purchasing an item online, and posting on a message board. Monitoring changes and updates to resources is crucial to ensuring product features are working as intended. That’s why we’re excited to announce expanded support for POST requests to allow all content types, including custom content types.
Previously, Uptime checks only supported POST requests containing `application/x-www-form-urlencoded` bodies. Now, request bodies can be of any type, including but not limited to: `application/json`, `application/xml`, `application/text`, and custom content types. This functionality can be paired with response validation matching (JSON path, regex, response codes, etc.) to ensure POST endpoints are appropriately modifying all resources. Additionally, alerts can be added to notify service owners when their POST endpoints are behaving atypically.
To get started, you can head to Monitoring > Uptime, select “+ Create Uptime Check”, view advanced target options, then populate the new Content Type field.
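If you prefer to create the check programmatically, here's a rough sketch with the Cloud Monitoring Python client. The custom_content_type field name is an assumption for the new capability (verify it against the uptime check API reference), and the host, path, and body are hypothetical.

```python
# A rough sketch with google-cloud-monitoring; `custom_content_type` is an assumed
# field name for the new custom content type support - verify against the API
# reference. Host, path, and request body below are hypothetical.
from google.cloud import monitoring_v3

client = monitoring_v3.UptimeCheckServiceClient()

config = monitoring_v3.UptimeCheckConfig(
    display_name="orders-post-check",
    monitored_resource={
        "type": "uptime_url",
        "labels": {"project_id": "my-project", "host": "api.example.com"},
    },
    http_check=monitoring_v3.UptimeCheckConfig.HttpCheck(
        request_method=monitoring_v3.UptimeCheckConfig.HttpCheck.RequestMethod.POST,
        use_ssl=True,
        path="/v1/orders",
        port=443,
        custom_content_type="application/json",  # assumed field for the new feature
        body=b'{"sku": "test-123", "qty": 1}',
    ),
    period={"seconds": 300},
    timeout={"seconds": 10},
)

client.create_uptime_check_config(
    request={"parent": "projects/my-project", "uptime_check_config": config}
)
```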
Visit our documentation on creating uptime checks for additional information and step-by-step instructions for creating your first uptime check.
Lastly, if you have questions or feedback about this new feature, head to the Cloud Operations Community page and let us know!
Read More for the details.