Azure – General availability: Azure NetApp Files support for large volumes up to 500TiB in size
You can now create Azure NetApp Files large volumes between 50TiB and 500TiB in size.
Read More for the details.
Advanced Container Networking Services for Azure Kubernetes Service (AKS) is a suite of services that tackles observability, security, and compliance challenges in your containerized applications. Gain deep insights with Advanced Network Observability, our first feature of the suite that unlocks Hubble metrics, CLI, and UI for powerful traffic monitoring and performance optimization.
Read More for the details.
Amazon DynamoDB local now supports configurable maximum throughput for individual on-demand tables and associated secondary indexes. Customers can use the configurable maximum throughput for on-demand tables feature for predictable cost management, protection against accidental surges in consumed resources and excessive use, and safeguarding downstream services with fixed capacities from potential overloading and performance bottlenecks. With DynamoDB local, you can develop and test your application while managing maximum on-demand table throughput, making it easier to validate the use of the supported API actions before releasing code to production.
DynamoDB local is free to download and available for macOS, Linux, and Windows. DynamoDB local does not require an internet connection, and it works with your existing DynamoDB API calls. To get started with the latest version, see “Deploying DynamoDB locally on your computer”. To learn more, see Setting Up DynamoDB Local (Downloadable Version).
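As an illustration (a minimal sketch, not an official sample), the boto3 call below creates an on-demand table with a throughput cap against a DynamoDB local instance assumed to be running on port 8000; the table name, key schema, and limits are placeholders.

```python
import boto3

# DynamoDB local ignores credentials, but boto3 still requires values.
dynamodb = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",  # DynamoDB local endpoint
    region_name="us-west-2",               # any region value works locally
    aws_access_key_id="fake",
    aws_secret_access_key="fake",
)

dynamodb.create_table(
    TableName="Orders",                    # hypothetical table
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",         # on-demand mode
    # Cap how much throughput this on-demand table may consume.
    OnDemandThroughput={
        "MaxReadRequestUnits": 1000,
        "MaxWriteRequestUnits": 500,
    },
)
```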
Read More for the details.
The AWS Network Firewall service quota limit for stateful rules is now adjustable. The default limit is still 30,000 stateful rules per firewall policy in a Region, but you can request an increase up to 50,000. This firewall rule limit increase helps customers strengthen their security posture on AWS and mitigate emerging threats more effectively.
A higher rule limit provides flexibility to customers with large-scale deployments to define their firewall policy with different combinations of AWS managed and customer defined rules. Starting today, you can implement a broader range of rules to defend against various threats and scale as you grow on AWS.
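For teams that automate quota management, the sketch below is illustrative only: the announcement does not list the quota code, so it is looked up by name through the Service Quotas API before requesting an increase to the new 50,000 ceiling.

```python
import boto3

quotas = boto3.client("service-quotas")

# Find the adjustable Network Firewall quota whose name mentions stateful rules,
# then submit an increase request up to the new 50,000 ceiling.
paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="network-firewall"):
    for quota in page["Quotas"]:
        if "stateful rule" in quota["QuotaName"].lower() and quota["Adjustable"]:
            print(f"Requesting increase for {quota['QuotaName']} ({quota['QuotaCode']})")
            quotas.request_service_quota_increase(
                ServiceCode="network-firewall",
                QuotaCode=quota["QuotaCode"],
                DesiredValue=50000.0,
            )
```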
Read More for the details.
Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS China (Ningxia) region. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists can now use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.
With a few clicks in the AWS Management Console, you can get started with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon Simple Storage Service (Amazon S3), access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes, as well as data in your operational databases, such as Amazon Aurora.
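If you prefer to script the setup rather than use the console, a hedged sketch of the equivalent calls with boto3’s redshift-serverless client is shown below; the namespace, workgroup, and base capacity are placeholder values for the China (Ningxia) region.

```python
import boto3

# Redshift Serverless in the AWS China (Ningxia) region.
rs = boto3.client("redshift-serverless", region_name="cn-northwest-1")

# A namespace holds databases, users, and permissions.
rs.create_namespace(
    namespaceName="analytics-ns",
    dbName="dev",
)

# A workgroup provides the compute; capacity scales automatically from this baseline.
rs.create_workgroup(
    workgroupName="analytics-wg",
    namespaceName="analytics-ns",
    baseCapacity=32,             # Redshift Processing Units (RPUs)
    publiclyAccessible=False,
)
```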
Read More for the details.
Amazon CloudWatch has extended the duration during which customers can access their alarm history. Customers can now view their alarm state-change history for the previous 30 days.
Previously, CloudWatch provided 2 weeks of alarm history. Customers rely on alarm history to review previous triggering events, alarming trends, and noisiness. This extended history makes it easier to observe past behavior and review incidents over a longer period of time.
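As a small, hedged example of consuming the longer retention, the boto3 snippet below pages through up to 30 days of state changes for a hypothetical alarm name.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=30)  # previously capped at roughly two weeks

paginator = cloudwatch.get_paginator("describe_alarm_history")
for page in paginator.paginate(
    AlarmName="my-high-cpu-alarm",       # hypothetical alarm
    HistoryItemType="StateUpdate",       # only state-change events
    StartDate=start,
    EndDate=end,
    ScanBy="TimestampDescending",
):
    for item in page["AlarmHistoryItems"]:
        print(item["Timestamp"], item["HistorySummary"])
```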
Read More for the details.
DMS Schema Conversion has released five generative artificial intelligence (AI)-assisted built-in functions to improve Oracle to PostgreSQL conversions. This launch marks the first generative AI-assisted conversion improvement in DMS Schema Conversion.
Customers can use these functions by applying the DMS Schema Conversion extension pack. The extension pack is an add-on module that emulates source database functions that aren’t supported in the target database and can streamline the conversion step.
DMS Schema Conversion is generally available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Singapore).
To learn more, visit Converting database schemas using DMS Schema Conversion. For more details on how to apply the extension pack, go to Using extension packs in DMS Schema Conversion.
Read More for the details.
The Azure API Center extension for Visual Studio Code is now generally available. You can use this extension to build, discover, try, and consume APIs in your API center.
Read More for the details.
Hibernate VMs when not in use to save compute costs, while persisting the in-memory state.
Read More for the details.
Activity log alert rules can now be saved in European regions, ensuring that alert metadata and processing remain within the EU Data Boundary.
Read More for the details.
At Beam, we’re building the next-generation digital social safety net. Our platform is used by state and local governments, community based organizations, and universities to administer public benefits and cash assistance programs. Our partners are able to drive low-friction application processes, tackle time-consuming eligibility determination processes, and disburse critical payments to vulnerable populations.
Recently, we migrated our entire infrastructure from AWS to Google Cloud with no downtime or impact to our users. Additionally, we transformed aspects of our application and engineering processes along the way to benefit from the newest cloud technologies. We are more effective and greener — all while cutting our cloud bill in half. In the following post, we’ll review Beam’s migration strategy, what we expected to get out of our move to Google Cloud, and some of our key results and takeaways.
When migrating to Google Cloud, Beam’s overall architectural strategy was to choose the services that would most effectively accelerate the integration of product-differentiating technologies, provide a scalable yet easy-to-understand cloud infrastructure, and introduce minimal administrative burden on the engineering team.
We want our engineers to focus on building a user-centered product with the features that our government partners need instead of supporting the outdated and inefficient infrastructure they used previously. This priority drove our decision to adopt Google Cloud services.
Beam’s platform consists of many containerized apps that need to interact and independently scale to deliver services to users. The platform used to run in self-hosted AWS Kubernetes clusters, but the engineering team found that the administrative burden of the clusters was holding them back. Using AWS to implement Kubernetes didn’t provide the ease of management and developer-friendly UI that we needed to scale.
This is where Google Cloud’s managed container offerings really shine. Cloud Run was the perfect fit for our small team, which wanted to deploy many highly scalable, interrelated container services with the least administrative burden possible. If we needed more flexibility down the road, we could easily redeploy our services to GKE. And since Cloud Run now supports sidecars and multi-container services, there was no reason not to adopt this fully managed container service.
Beam uses PostgreSQL for our core databases, and AlloyDB for PostgreSQL, Google Cloud’s fully managed PostgreSQL-compatible database, seemed like the perfect fit for our needs. Even though AlloyDB was a relatively new service when we began our migration, we wanted a highly scalable database with the ability to replicate across zones and regions for disaster recovery and high-volume reporting traffic — and AlloyDB fit that bill. Additionally, AlloyDB’s Query Insights dashboard seemed like it would close the database administration (DBA) gap and help us tune the performance of our application. Finally, we wanted a database product that didn’t require hand-holding and dedicated DBAs to manage, which would allow our teams to focus on innovation.
The accelerated pace of improvement around machine learning and AI technologies has generated both intense excitement and trepidation. If implemented responsibly, AI stands to reduce the time tax and revolutionize the government’s ability to serve its people. Done carelessly, however, AI could amplify harm and bias and leave vulnerable populations further behind.
In this challenging environment, Beam’s responsible AI strategy is enabled by cutting-edge Google Cloud AI, such as Vertex AI. Our AI offerings aim to radically reduce administrative burden on applicants and overhead on program administration by streamlining the eligibility determination process from documents and case histories. Google Cloud enables us to spend more time on responsibility and equity considerations than we could otherwise.
I’ve done around half a dozen major infrastructure migrations in my career, and Beam’s migration from AWS to Google Cloud was the smoothest. In part, this was thanks to Beam’s awesome engineering team, but also due to how easy Google Cloud makes a cloud-to-cloud migration.
Here’s a summary of how we did it:
We stood up all our infrastructure with Terraform so everything would be documented, repeatable, and compliant going forward.
Using Database Migration Service, we created continuous replication from our PostgreSQL databases in AWS to AlloyDB instances.
Meanwhile, our developers focused on modernizing our service architecture with things like Pub/Sub for asynchronous job queueing and Cloud Storage for file uploads.
While we were migrating to Google Cloud, we rebuilt our DevOps processes around GitHub.
We used GitHub Actions to deploy our container services to Cloud Run as part of our new CI/CD pipelines.
We ran load tests on our Cloud Run services to ensure they would scale while keeping latency low for our end users.
On cutover day, we briefly paused writes to our databases, and we switched our DNS from the AWS endpoints to the Cloud Run URLs.
Due to the continuous data sync between our previous databases and AlloyDB, our users experienced no outages or data loss during the switch, and for us at Beam, it was a huge relief to be running our application in a more lightweight yet modern cloud platform.
Six months later, we are thrilled with the performance of our platform rebuilt around Cloud Run and AlloyDB. With these improvements, we are on track to distribute hundreds of millions of dollars across program areas, including workforce development, child care, disaster recovery, guaranteed income, and housing.
Cloud Run just works — services scale up and down to match traffic patterns, and our deployments are easy to manage with GitHub Actions and container service definitions stored in YAML. When manual intervention is needed, for example, to route traffic between different application revisions, the Cloud Run UI is vastly superior to the container orchestration tools we previously used.
AlloyDB feels like having a team of DBAs working to keep our database healthy. Latency is shockingly low, even as our business has grown. With a few clicks in the console, we created a separate read-only pool for our data analytics and reporting needs, and we restored a point-in-time backup to a new cluster to retrieve a previous version of user data. And by purchasing committed use discounts, we’ve cut our database costs in half for the next three years.
Some more benefits we’ve seen include the following:
Security: Google prioritizes security in the areas important to Beam and its government partners. For example, default encryption at rest across all Google Cloud services, easy keyless service accounts with Workload Identity, and Security Command Center are some of the features that empower a small but security-focused engineering team.
Developer-friendly: Our developers find Google Cloud documentation easier to understand, and SDK integrations are straightforward and quick compared to other cloud providers.
Innovation-enabling: Our cross-functional team was able to build effective AI models for pilot partners on top of Cloud Run and other services.
Socially-conscious and environmentally responsible: We’re proud to have Google Cloud as our strategic cloud partner for non-technical reasons too. Google Cloud’s sustainability strategy is the most ambitious of all the key cloud providers, and their philanthropic initiatives align with Beam’s mission and with the values of our partners and investors.
Most importantly, we’ve seen all these benefits for us and our partners while reducing our total monthly cloud costs by half!
Learn more about how Google Cloud can help your startup, and unlock resources at the recently launched Startup Learning Center.
Read More for the details.
The Google Cloud global front end is a solution we launched last year as part of Cross-Cloud Network that helps customers deliver and protect internet-facing web services using the same technologies, infrastructure, and teams that we use for our own Google web services. By leveraging the power of our global Cross-Cloud Network, it can do this for workloads hosted not only in Google Cloud but also in other public clouds, co-location facilities, and on-prem data centers.
The global front end solution consists of the Cloud External Global Application Load Balancer, providing open, scalable, and programmable traffic control; Cloud CDN for performance and backend infrastructure offload; and Cloud Armor for planet-scale web and DDoS protection.
At Google Cloud Next ‘24, we announced a series of enhancements across the solution to help our customers improve the performance, protection, and scalability of their internet-facing web services (sites, apps, and APIs) plus enable higher levels of automation and programmability. In this blog, we take a deeper look at the global front end solution, and how it uses the new capabilities in our networking platform.
Service Extension callouts
The newly released Service Extension callouts capability makes the Google Cloud web data plane programmable, allowing easier customization and improved partner solution integration. Service Extension callouts allow a wide range of control over various web requests and responses, including changing origin routing, adding security services to inspect and protect traffic, modifying the request headers, or adjusting the HTML.
In addition to Service Extensions, we have a library of example code for common operations. We also have a broad set of partners who are delivering security and experience optimization integrations. You can read more about Service Extensions here.
Private origin access over the Internet with App Connector
The global external Application Load Balancer coordinates communication with backend infrastructure and today provides a broad set of options for connecting to and serving from Google Cloud infrastructure such as Google Compute Engine and Google Kubernetes Engine (GKE), as well as infrastructure outside Google Cloud such as other public clouds, on-prem data centers, or co-location facilities. It can do this over the public internet using an FQDN or IP address, or even using HA VPN links. Both options require opening ports from the web origin environment to the public internet. And while those ports can be secured to only allow Google Cloud communications, some customers don’t want any ports open to the internet.
Today, we have a new option for private origin communication over the internet using Google Cloud’s App Connector technology. This lets you run dual App Connector agents in another cloud, on-prem, or in a co-location environment. When the agents launch, they connect back to the App Connector service in Google Cloud, enabling a reverse tunnel for origin communications without needing to open incoming ports from the internet. This capability is now in preview; contact your sales team for more information and to gain access.
Custom error responses
Custom error responses, now in preview, allows you to customize your own error responses when HTTP 4xx and 5xx errors are generated. This lets you define your own error and maintenance pages with custom messages and branding. You can also create custom error pages when requests are denied by Cloud Armor security policies.
Load balancing for AI workloads
Generative AI workloads have unique traffic patterns, with large requests and responses along with unique backend compute usage. This can lead to variable processing times and uneven use of GPU and TPU compute resources, resulting in suboptimal user response times and higher infrastructure costs.
To address this, we are enhancing our Cloud Application Load Balancing with a new class of innovations to optimize for AI workloads. This includes load balancing based on LLM platform queue depth to optimize TPU and GPU utilization, and enhancements to monitor the health of individual model service endpoints for higher reliability for cross-region deployments. You can read more details in this blog.
Cloud Armor Enterprise for premium web and DDoS protection
Cloud Armor Enterprise is the new name for the premium tier of Cloud Armor protection with Adaptive Protection ML DDoS protection, Google Threat Intelligence, enhanced DDoS attack visibility, and more. In addition to the new name, we also introduced more flexible consumption models, with pay-as-you-go pricing in addition to the existing annual subscriptions.
Granular Adaptive Protection ML models for Layer 7 DDoS Defense
We have enhanced our Adaptive Protection systems to allow the creation of more granular ML-based traffic models, now in preview, to better detect service-specific attacks and provide mitigations. This feature lets you configure specific hosts or paths that Adaptive Protection will analyze, such as the set of paths on your website related to new account creation or a checkout sequence for buying a product, experience, or booking.
GraphQL API protection
GraphQL is an open-source data query and manipulation language for APIs. The Cloud Armor inspection engine and its built-in rules have been enhanced to allow it to help protect GraphQL-based API calls.
UI controls for dynamic compression with gzip and Brotli
To make performance tuning even easier, we recently enhanced the Google Cloud console to allow control of our dynamic compression capability. This feature automatically compresses text and code responses served by Cloud CDN by 60% to 85% in typical cases. It determines the requesting browser’s capabilities and uses gzip or Brotli compression to reduce the size of the objects and help improve performance.
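For teams that manage this setting outside the console, a minimal sketch using the google-cloud-compute client is shown below; it assumes an existing global backend service (named "web-backend" here for illustration) that is fronted by Cloud CDN.

```python
from google.cloud import compute_v1


def enable_dynamic_compression(project_id: str, backend_service_name: str) -> None:
    client = compute_v1.BackendServicesClient()

    # Fetch the current resource to obtain its fingerprint for optimistic locking.
    existing = client.get(project=project_id, backend_service=backend_service_name)

    patch_body = compute_v1.BackendService(
        compression_mode="AUTOMATIC",   # let Cloud CDN pick gzip or Brotli
        fingerprint=existing.fingerprint,
    )
    operation = client.patch(
        project=project_id,
        backend_service=backend_service_name,
        backend_service_resource=patch_body,
    )
    operation.result()  # wait for the change to apply


# Example with placeholder values:
# enable_dynamic_compression("my-project", "web-backend")
```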
Internet observability with Catchpoint
We announced a new performance observability partner for our global front end solution: Catchpoint, whose Internet Performance Monitoring service helps customers monitor web performance globally and improve uptime by catching issues before they impact the business. Catchpoint is offering a free trial and expert assistance to help measure the performance of the Google global front end. You can reach them at gcp-catchpoint-trial@catchpoint.com to learn more.
CI/CD Automation reference guide and toolkit
Our newly released Global Front End CI/CD Automation Toolkit and reference guide make it easier to integrate the global front end into a number of popular CI/CD platforms, including Jenkins, GitLab, and our own Cloud Build. The toolkit provides built-in recommended settings, pre-created workflows for common operations like roll-out and roll-back across the solution, and simplified workflows to enable canary rollouts of new versions of your applications and services. Since it’s a toolkit, you can use it in whole or just take the pieces you need to help with end-to-end automated deployment of your applications and services.
With the Cross-Cloud Network, we’re empowering customers to simplify, modernize, and secure their hybrid and multi-cloud networks and applications. And using the global front end solution lets you deliver, scale, and protect your internet-facing web services using the same technologies, infrastructure, and teams as we use for our own web services.
See below for links to more information on the Cross-Cloud Network global front end solution:
ARC309, Designing a Cross-Cloud Network for Internet-facing Applications
Read More for the details.
Editor’s note: Today’s post includes content from our Google Cloud Technical Guides for Startups video series, which includes detailed guides to support startups along every step of the growth journey. Watch the entire series on-demand on the Google Cloud Tech channel or directly on our website — and don’t forget to subscribe to stay up to date.
Blockchain technology is finding real-world applications in industries around the world, with blockchain startups leading the way in areas like gaming, financial services, and identity verification. However, building and managing blockchain infrastructure can be complex and time-consuming.
This is where Google Cloud comes in.
Google Cloud offers a comprehensive suite of products and services that can help blockchain startups build, deploy, and manage their applications quickly, securely, and efficiently.
Blockchain technology has the potential to transform many aspects of our lives. However, despite its promise, blockchain also faces the following challenges that need to be addressed before it can achieve widespread adoption.
Scalability: Blockchain networks can be slow and inefficient, especially as the number of users and transactions increases. This is because each transaction needs to be verified by all nodes on the network, which can be a very time-consuming process.
Performance: Achieving fast and reliable network performance is another critical challenge for blockchain technology, particularly as the number of users and transactions on the network grows.
Security: Blockchain nodes are targets for cyberattacks, as they hold valuable data and can be used to manipulate the network.
Data: The sheer volume of data stored on blockchains can pose challenges for efficient data retrieval, scalability, and privacy preservation.
To help you address these challenges, Google Cloud offers a wide range of products and services tailored to the needs of blockchain startups. Let’s explore some of the key options.
Google Cloud’s robust network combined with Compute Engine allows startups to host their blockchain nodes and build decentralized applications (dApps) with ease. Compute Engine also supports managed instance groups (MIGs) with autoscaling, which can seamlessly adapt to fluctuating demand, automatically spinning up additional virtual machines (VMs) when needed and scaling down as demand subsides. Our VMs boot up in mere seconds, ensuring your dApps remain responsive and efficient.
Another popular Google Cloud service that makes it easy to deploy and manage containerized applications is Google Kubernetes Engine (GKE), our managed Kubernetes service. GKE is a great choice for companies building and managing Web3 applications, empowering developers with a scalable, self-healing infrastructure and granular monitoring while simplifying node management for blockchain providers like Blockdaemon.
Our latest offering for blockchain node hosting, Blockchain Node Engine, is a fully managed service designed specifically for Web3 development. It allows developers to quickly and easily deploy dedicated blockchain nodes, eliminating the need to manage the underlying infrastructure themselves. With Blockchain Node Engine, you don’t need to worry about SRE tasks; Google Cloud handles them for you. Startups can also use Blockchain Node Engine to bootstrap their dApp projects. For instance, startups can access Ethereum block data in less than an hour, and support for Polygon and Solana is now in public preview.
For startups who are looking for fast and secure transaction processing, Google Cloud network services leverage automation, advanced AI, and programmability to scale, secure and optimize your infrastructure. Beyond location flexibility, Google Cloud grants you granular control over your networking infrastructure, including firewall rules and IP addressing.
Google’s high-performance private network seamlessly connects blockchain nodes and your dApp with high-throughput, low-latency interconnects. We prioritize security and access control, offering IAM (Identity and Access Management) and KMS (Key Management Service) to ensure compliance with user access roles and enable secure submission and authorization of digital asset transactions using encryption keys and signatures.
With Google Cloud Blockchain Analytics, you can easily access historical blockchain data through BigQuery, simplifying data analysis with SQL queries. This eliminates the need for developers to manage nodes or develop custom indexers, saving time and resources. By leveraging BigQuery’s query engine and joining blockchain data with internal data, developers can gain deeper insights into user behavior and business operations.
In addition, BigQuery public datasets now include 11 additional blockchains in preview, addressing the growing demand for comprehensive data access in the Web3 space. The newly added blockchains include: Avalanche, Arbitrum, Cronos, Ethereum (Görli), Fantom (Opera), Near, Optimism, Polkadot, Polygon Mainnet, Polygon Mumbai, and Tron. By providing a readily accessible and comprehensive source of blockchain data, BigQuery empowers developers, analysts, and organizations to unlock the full potential of blockchain technology.
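To give a flavor of the developer experience, the snippet below runs a simple aggregation over the long-standing public Ethereum dataset with the BigQuery Python client; the dataset names for the newly added chains may differ, so treat the table reference as an example.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses your default project and credentials

query = """
    SELECT DATE(block_timestamp) AS day, COUNT(*) AS tx_count
    FROM `bigquery-public-data.crypto_ethereum.transactions`
    WHERE block_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY day
    ORDER BY day
"""

for row in client.query(query).result():
    print(f"{row.day}: {row.tx_count} transactions")
```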
Likewise, Vertex AI empowers startups to unlock the hidden potential of blockchain data, combining scalability, real-time processing, and Google’s deep AI and machine learning expertise to deliver unparalleled insights. With cutting-edge generative capabilities, Vertex AI supercharges data analysis and drives new possibilities to explore the power of the blockchain, including generating synthetic data, crafting hyper-realistic simulations — or even using real-world data to create dynamic NFTs that can be used to engage with customers.
The Google Cloud Web3 startup program is designed to support and accelerate Web3 projects and startups. Participating in the program can unlock a treasure trove of benefits, helping to propel Web3 startups to success.
The program provides significant financial support with sizable credits for Google Cloud services, easing the burden of infrastructure costs and empowering you to focus on innovation. Furthermore, the program offers unique learning opportunities that keep startups ahead of the curve in the ever-evolving Web3 landscape. By joining, teams get access to advanced hands-on labs and cutting-edge Google Cloud technology, including early access to new products. For more information about how to join the community and get funding, expert support, and other specific Web3 benefits, visit the Web3 startup program website.
At Google Cloud, we are committed to supporting the Web3 community and fostering innovation in the blockchain space. By providing cloud tools and resources developers need to build and scale dApps securely and through our Web3 startup program, we are helping hundreds of blockchain businesses grow and create groundbreaking new solutions that will help shape the future of the internet.
Polygon
Google Cloud and Polygon Labs are partnering to make it easier for developers to build and use Polygon protocols, which will accelerate the growth of the Web3 ecosystem. Google Cloud is a strategic cloud provider for Polygon protocols, providing tools and infrastructure to help developers build, launch, and grow their Web3 products and dApps.
Blockchain Node Engine helps developers overcome the challenges of provisioning, maintaining, and operating their own blockchain nodes. Google Cloud Marketplace also offers one-click deployment of a Polygon PoS node to power dApps quickly and easily. Additionally, Google Cloud provides developers with access to the Polygon blockchain dataset on BigQuery, allowing them to analyze real-time, on-chain and cross-chain data to inform decision-making.
Nansen
BigQuery plays a pivotal role in blockchain analytics platform Nansen’s ability to process and analyze vast amounts of blockchain data, reaching up to 1 petabyte per day. This real-time data processing capability is essential for Nansen to provide its users with up-to-date insights and make informed decisions in the rapidly evolving crypto market.
Furthermore, Google Cloud’s machine learning tools, particularly BigQuery ML and Cloud Inference API, have inspired Nansen’s AI predictive modeling and recommendations strategy. With Google Cloud, Nansen has been able to develop sophisticated algorithms that can uncover hidden patterns and trends in blockchain data, providing users with valuable insights and predictive recommendations.
Matter Labs/zkSync
zkSync by Matter Labs is a Layer-2 protocol that scales Ethereum with cutting-edge zero-knowledge (ZK) technology. Since going to mainnet in March 2023, it has become one of the fastest-growing ZK rollups globally. Matter Labs has been using Google Cloud from the beginning, leveraging GKE and other Google Cloud services to keep up with their fast growth and high demand. As a result, they have optimized their proofing system for Google Cloud.
In 2024, Matter Labs and Google Cloud doubled down on their relationship to fuel the next phase of growth for their new hyperchain offering, a new Ethereum scaling solution proposed by Matter Labs.
Web3 is revolutionizing how the world interacts with data and applications, and we’re thrilled to be working alongside some of the pioneers helping to create this new reality. If you’re a blockchain startup seeking a reliable and scalable platform to build your business, look no further than Google Cloud. Get started today by signing up for a free trial and explore the vast array of resources available to help you succeed. See you in the cloud!
Read More for the details.
Ever struggled with managing firewall rules for sites like Google? In the past, it meant manually listing every single IP address associated with your domain. Talk about a headache!
But guess what? Things just got a whole lot easier! With the new FQDN feature in Cloud Next Generation Firewall (NGFW), you can simply specify the domain name (like www.google.com) in your firewall rule. No more endless lists of IP addresses to keep track of!
In the dynamic landscape of cloud computing, security is paramount. Cloud NGFW offers a robust set of features to safeguard your infrastructure, and among them is the fully qualified domain name (FQDN) feature. FQDN adds an extra layer of flexibility and precision to your firewall rules, allowing you to enhance security measures while simplifying network management. Let’s explore how.
A fully qualified domain name (FQDN) is the complete domain name for a specific host on the internet, which ultimately gets translated to an IP address when a connection to the host is established.
In the context of Google Cloud NGFW Standard, FQDN enables users to create firewall rules based on domain names rather than just IP addresses. This introduces a more flexible approach to controlling network traffic, as it allows for rule definition based on specific services or applications hosted on those domains even when associated IP addresses change dynamically.
Benefits of using FQDN
Improved reliability: FQDNs do not change when the underlying IP addresses change, such as traffic that’s routed through load balancers. This can help to reduce downtime and improve the reliability of your cloud workloads.
Easier to use: FQDNs are more human-readable and easier to remember than IP addresses. This can make your firewall rules more readable and easier to maintain.
Enhanced security: FQDNs can help to improve the security of your applications by making DNS spoofing attacks more difficult.
FQDN objects must adhere to standard FQDN syntax by following supported domain name format.
FQDN objects can be used in firewall policy rules within hierarchical, global, and regional network firewall policies to regulate traffic to or from specific domains.
Cloud NGFW periodically updates firewall policy rules containing FQDN objects with the latest domain name resolution results, based on the VPC name resolution order of Cloud DNS. Cloud DNS notifies Cloud NGFW of any changes in DNS records. These updates are kept consistent with the underlying VMs, which ensures reliable egress control.
If multiple domain names resolve to the same IP address, the firewall policy applies to the IP address itself, treating FQDN objects as Layer 3 entities.
In egress firewall policy rules, if a domain includes CNAMEs in the DNS record, ensure all potential aliases are configured to guarantee consistent policy enforcement during DNS record changes. Failure to include all relevant aliases may result in policy malfunction.
Compute Engine internal DNS names can also be used in network firewall policy rules, provided that alternative name servers are not used in outbound server policy configurations.
For incorporating custom domain names in network firewall policy rules, Cloud DNS managed zones can be utilized for domain name resolution. Ensure that alternative name servers are not configured in the VPC network’s outbound server policy so that the records in managed zones are consulted.
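To make the mechanics concrete, here is a rough sketch (the policy name and domain are placeholders, and the field names assume the current google-cloud-compute client rather than an official sample) that adds an egress allow rule referencing an FQDN object to an existing global network firewall policy:

```python
from google.cloud import compute_v1


def add_fqdn_egress_rule(project_id: str, policy_name: str) -> None:
    client = compute_v1.NetworkFirewallPoliciesClient()

    rule = compute_v1.FirewallPolicyRule(
        priority=1000,
        direction="EGRESS",
        action="allow",
        description="Allow HTTPS egress to www.google.com via FQDN object",
        match=compute_v1.FirewallPolicyRuleMatcher(
            # An FQDN object instead of IP ranges; Cloud NGFW keeps the
            # resolved addresses current using Cloud DNS.
            dest_fqdns=["www.google.com"],
            layer4_configs=[
                compute_v1.FirewallPolicyRuleMatcherLayer4Config(
                    ip_protocol="tcp", ports=["443"]
                )
            ],
        ),
        enable_logging=True,
    )

    operation = client.add_rule(
        project=project_id,
        firewall_policy=policy_name,
        firewall_policy_rule_resource=rule,
    )
    operation.result()  # wait for the long-running operation to finish


# Example with placeholder values:
# add_fqdn_egress_rule("my-project", "my-global-firewall-policy")
```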
Follow the guides below for detailed, step-by-step implementation of firewall policies using FQDNs:
Implementing hierarchical firewall policies
Implementing global firewall policies
Implementing regional firewall policies
The following restrictions apply to both ingress and egress firewall rules employing FQDN objects:
FQDN objects do not support wildcard (*) or top-level (root) domain names, such as *.example.com and .org.
A domain name can resolve to a maximum of 32 IPv4 addresses and 32 IPv6 addresses. If DNS queries yield more than 32 IPv4 or IPv6 addresses, only the first 32 addresses are included. Therefore, avoid including domain names that resolve to more than 32 IPv4 and IPv6 addresses in ingress firewall policy rules. Note that this does not affect the use of FQDNs in egress firewall rules.
Certain domain name queries yield unique answers depending on the location of the requesting client. Firewall policy rules perform DNS resolution in the Google Cloud region containing the VM to which the rule applies.
When incorporating FQDN objects into ingress firewall policy rules, be aware of the following limitations:
Avoid using ingress rules utilizing FQDN objects if domain name resolution results are highly variable or if DNS-based load balancing is employed. For instance, numerous Google domain names utilize a DNS-based load-balancing scheme.
When using FQDN objects within firewall policy rules, you may encounter the following exceptions during DNS resolution:
Bad domain name: If a firewall policy rule contains one or more domain names in an invalid format, an error occurs. The rule cannot be created unless all domain names are correctly formatted.
Domain name does not exist (NXDOMAIN): If a domain name does not exist, Google Cloud disregards the FQDN object in the firewall policy rule.
No IP address resolution: If a domain name fails to resolve to any IP address, the associated FQDN object is disregarded.
NOTE: Cloud NGFW considers NXDOMAIN and IP address resolution failure cases functionally identical.
Unreachable Cloud DNS server: If a DNS server becomes unreachable, firewall policy rules employing FQDN objects apply only if previously cached DNS resolution results are accessible. Otherwise, the FQDN objects within the rule are ignored, either due to the absence of cached results or expiration of the cached DNS data.
Dive into the docs: Start exploring the Google Cloud NGFW documentation for a deeper understanding of FQDN objects and firewall rules.
Test it out: Create some FQDN objects and try implementing them in your own firewall rules. See how they simplify your workflow and enhance your security.
Share your knowledge: Help others benefit from FQDN objects by sharing this article with your colleagues and network.
With FQDN objects in your toolkit, you’re well on your way to a more secure and streamlined cloud environment. Enjoy the newfound ease and flexibility in your firewall management!
Read More for the details.
Chromebook Plus laptops are built to deliver powerful AI experiences at a better value
Last year, we introduced Chromebook Plus, a category of Chromebooks designed to offer up to 2x more processing power, memory and storage,1 along with software and AI capabilities that make handling advanced workloads a breeze.
Today, we are announcing the next set of Chromebook Plus laptops from Acer, along with new features powered by Google AI designed to help your teams improve productivity, creativity, and collaboration.
[Image gallery: Acer Chromebook Spin 714; quote from Jennifer Larson, General Manager of Commercial Client Segment at Intel; featured capabilities: Help Me Write, Generative Wallpaper, Generative VC Backgrounds, Microsoft Progressive Web Apps, Google Task Integration, and new Workspace apps.]
Help me be more productive
Teams need tools that help them minimize their workload, not add to it. We’re introducing a set of features that empower users to work smarter and get work done more effectively:
Help me write for Chromebook Plus2 makes it easier to refine text on the web or in an app, whether managing a social account, finalizing a new blog, or updating web copy.
Google Tasks and Calendar integration gives you access to your to-do list across your devices with one click and helps you stay organized throughout the day.
Adaptive Charging predicts your device usage patterns and helps preserve battery health by limiting the amount of time the device spends at 100% charge.
Later this year, look out for even more productivity features coming to Chromebook and Chromebook Plus:
You’ll now see a summary of previously open windows when logging back into your device, preventing you from losing your place when you sign back in and making it easy to jump back into your workflow.
Focus allows users to minimize distractions by setting dedicated time to focus on tasks based on their to-do list and calendar.
Help me read can summarize websites or PDFs with a right click, and gives you the power to ask follow-up questions about the content to gain a deeper understanding.
Help me create and collaborate
Teams should be able to collaborate effectively, whether in person or remote. To support that collaboration, we’re launching new features to help enhance how you show up on calls, and allow you to put a personal touch on your devices.
Content creation features like Screencast make it easy to record, edit, and share content, automatically translated into up to 12 languages. Creating new global training programs, recording product demos, or rehearsing for a presentation? Screencast makes it easy. We’ve also enhanced our built-in screen capture tool to save screen recordings in GIF format for more animated presentations, demos, and more.
Noise cancellation and lighting allow any app that accesses the microphone or camera on a Chromebook Plus laptop to benefit from Google AI-powered noise cancellation and lighting capabilities, when enabled.
Generative AI wallpapers and video call backgrounds allow for creative expression and a personal touch on enterprise devices. AI-generated video call backgrounds help remote team members show up like pros when joining a meeting, across any application.
Later this year, we’ll be giving you even more tools to create and collaborate:
Get more out of your meetings with enhanced video call controls to customize your sound and picture directly from your Chromebook shelf.
AI extends beyond the OS with Gemini and Gemini for Google Workspace
While Chromebook Plus offers these features on the OS, it is also optimized for innovative web apps like Google Workspace and Gemini. Bringing these together, businesses can empower their workforce with Google apps designed to help users be productive, collaborative and creative.
Google Gemini offers chat to supercharge your ideas, write, learn, plan and more, and is pinned to your shelf on Chromebook Plus laptops for quick access.2
Gemini for Google Workspace is a premium offering that delivers industry-leading AI capabilities and experiences throughout Google Workspace, helping you research and answer questions, generate copy, images and more. Together with Chromebook Plus features, teams can benefit from AI whether they are working in Google Workspace, Chrome browser, or ChromeOS.
Help me manage devices
To simplify device management, we’re enhancing the Google Admin console with Device Hub. Device Hub centralizes fleet information for IT admins and provides notifications and recommendations based on potential issues, making fleet management more seamless. Later this year, we’ll be releasing additional features built directly into the Google Admin console, all available with ChromeOS device management.
Get ready to supercharge your team
Chromebook Plus gives teams more freedom to build, collaborate, and manage than ever before. We are releasing these AI features on a rolling basis as an automatic update, starting with M126 in June.
And while features may not be immediately available on managed devices as we continue to build out robust IT controls, businesses deploying Chromebook Plus have much to look forward to this year.
Visit our website to learn more about Chromebook Plus, and find the right device for your business.
1 When compared to top selling Chromebooks from July 2022 to Dec 2022.
2 Admins will be able to control these features remotely using Google Admin console in the coming months.
Read More for the details.
The Graph Semantics extension to Kusto is now generally available. Users are encouraged to integrate this tool into their production workflows to enhance understanding of complex datasets and uncover hidden patterns.
Read More for the details.
Azure Site Recovery is introducing reporting capabilities to give your BCDR admin rich insights into the estate protected with Site Recovery, for audit and tracking purposes. These reports are highly customizable and are available out of the box in the Azure Business Continuity Center, Recovery Services vault, and Backup Center. They provide historical evidence on failover jobs and replicated items.
Read More for the details.
Azure Site Recovery (ASR) now surfaces default alerts via Azure Monitor for critical events such as replication health turning unhealthy, failover failures, agent expiry, and so on. You can monitor these alerts via the Azure Business Continuity Center, Azure Monitor dashboard, or your Recovery Services vault and route these alerts to various notification channels of choice (Email, ITSM, Webhook, SMS).
Read More for the details.
Application volume group for Oracle is designed to greatly simplify the deployment of Azure NetApp Files volumes required to install and operate Oracle databases on Azure virtual machines at enterprise scale, with optimal performance and according to best practices.
Read More for the details.
New option helps developers manage Web PubSub resources
Read More for the details.