AWS – Zonal shift for Amazon Route 53 Application Recovery Controller now available in 18 additional Regions
Zonal shift for Amazon Route 53 Application Recovery Controller is now available in all standard AWS Regions.
Read More for the details.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 3.4.0 for new and existing clusters. Apache Kafka 3.4.0 includes several bug fixes and new features that improve performance. Key features include a fix to improve stability to fetch from the closest replica. Amazon MSK will continue to use and manage Zookeeper for quorum management in this release. For a complete list of improvements and bug fixes, see the Apache Kafka release notes for 3.4.0.
Read More for the details.
Today, we are excited to announce the launch of the customizable dashboard feature on the AWS Batch console, providing a single view that displays resource metrics based on your specific needs. This feature allows you to redesign and target different types of widgets in your preferred order, making it easier to troubleshoot issues using widgets such as job logs, job queue metrics, etc. You can also add a container insights widget to track your compute environment utilization.
Read More for the details.
Azure Cold Storage, the most cost-effective access tier with near real-time read latency for infrequently accessed unstructured data, is now available in public preview.
Read More for the details.
The latest Azure IoT Edge releases provide official packages for Red Hat Enterprise Linux 9 on AMD64 devices.
Read More for the details.
Electric vehicles already account for one in seven car sales globally, and with new gas and diesel cars being phased out across the world, global sales are forecast to reach 73 million units in 2040. But with power grids becoming increasingly dependent on variable energy sources such as wind and solar, rising demand from electric vehicles risks overstraining grids at peak times, potentially leading to power outages.
At Kaluza, we believe that our platform has a vital role to play in helping power grids and utility companies to stabilize their networks, while at the same time delivering more affordable, cleaner energy to the consumer. Powered by Google Cloud, the advanced algorithms behind our Kaluza Flex solution automatically charge electric vehicles when the power supply is at its cheapest and greenest, helping to accelerate the global transition towards a zero-carbon future.
Launched by OVO Energy in 2019, Kaluza has taken its deep understanding of the energy market to partner with some of the world’s major energy suppliers and vehicle manufacturers, including AGL in Australia, Fiat and Nissan in the UK, and Mitsubishi Corporation and Chubu in Japan, to launch smart charging programs that help customers save money while reducing their carbon footprint.
A good example of this is Charge Anytime, which we recently launched with OVO Energy in the UK. With this tariff, customers use Kaluza to smart-charge their electric vehicle, and pay just 10p per kWh — a third of their household electricity rate — to do so. This means that if the customer plugs in their vehicle to charge when they get home from work at, say, 6:00 p.m. — a time when both demand and the carbon intensity on the grid are at their highest — their vehicle will then be smartly charged at the lowest cost and greenest periods throughout the night, ready for when they need it in the morning.
This smart charging reduces the energy company’s costs by enabling them to take advantage of lower wholesale electricity prices. These savings are then passed on to the end customer through tariffs such as Charge Anytime, saving customers hundreds of pounds a year and reducing their carbon footprint. Meanwhile, the National Grid is able to reduce the strain on the network during peak hours, while simultaneously using up the excess renewable energy that might otherwise have gone to waste.
Behind Kaluza’s smart charging solution lies some sophisticated technology, all of which is built on Google Cloud. Our core optimization engine gathers real-time data from a wide range of sources, including battery and charging data from the electric vehicles, and data from the energy suppliers and grid operators, such as the carbon intensity, and price forecasts.
After passing through our real-time data backbone, that data is stored in BigQuery where it’s used to train and validate our smart charging optimization models. These models are then deployed with Google Kubernetes Engine so that whenever a customer plugs in an electric vehicle, data from that vehicle passes in real-time through our optimization engine to calculate the ideal charging schedule for that vehicle, ensuring it uses the cheapest, least carbon-intensive energy available.
Of course, the customer isn’t aware of any of this complexity. All they need to do is open their charging app and use Kaluza’s intuitive interface to set what time they want their car to be ready and how much charge they want in their battery. Then they simply plug in their car, and our algorithms take care of the rest.
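In skeleton form, the core scheduling idea — shifting the required energy into the cheapest forecast hours before the departure time — might look like the following. This is an illustrative sketch only: Kaluza's production optimization engine also weighs carbon intensity, grid constraints, and live battery data, and every price and parameter below is made up.

```python
import math

def plan_charging(forecast, kwh_needed, charger_kw):
    """Greedy smart-charging sketch: given an hourly price forecast for
    the plug-in-to-departure window (a chronological list of
    (hour_label, price_per_kwh) pairs), charge during the cheapest
    hours needed to deliver the requested energy."""
    hours_needed = math.ceil(kwh_needed / charger_kw)
    # Indices of the cheapest slots, then restored to chronological order.
    cheapest = sorted(range(len(forecast)), key=lambda i: forecast[i][1])[:hours_needed]
    return [forecast[i][0] for i in sorted(cheapest)]

# Illustrative overnight forecast in pence/kWh (made-up numbers):
forecast = [("18:00", 35), ("19:00", 34), ("20:00", 30), ("21:00", 24),
            ("22:00", 16), ("23:00", 12), ("00:00", 10), ("01:00", 9),
            ("02:00", 9), ("03:00", 10), ("04:00", 14), ("05:00", 20),
            ("06:00", 28)]

# 30 kWh via a 7 kW charger -> the five cheapest overnight hours.
schedule = plan_charging(forecast, kwh_needed=30, charger_kw=7)
```

Even this toy version shows the effect described above: a car plugged in at 6:00 p.m. draws nothing during the expensive evening peak and instead charges in the small hours, when prices (and typically carbon intensity) are lowest.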
Customers can also use Kaluza to view breakdowns of how much carbon and money they’ve saved, along with insights around things like billing and battery life, all of which is backed by Cloud SQL.
With Google Cloud, we were able to roll out this end-user app very quickly. Instead of having to build a different version of the app for each operating system, we were able to build an OS-agnostic version using Flutter, which then builds the app for each platform, enabling us to get to market faster.
This has been a benefit of our architecture in general. With Google Cloud taking the complexity out of otherwise time-consuming development processes, we’ve been able to experiment with and validate propositions rapidly, and roll out new products and features at speed, to ensure that we remain at the vanguard of a rapidly evolving sector.
As for the grid operators and energy companies, the Kaluza platform allows them to visualize how many participating electric vehicles are plugged into the network at any one time. BigQuery and Looker Studio dashboards provide granular insights, such as how many vehicles are idle, how many are charging, and how well our optimization engine is working.
The platform also allows companies to view those vehicles on an aggregate level, and identify any issues. Google Cloud machine learning capabilities can even allow grid operators to use this aggregate view to forecast how much energy will be required at any one time, and dial the power generation up or down accordingly.
Ultimately, these insights help network operators and utility companies to optimize energy usage, and balance out the peaks and troughs of supply and demand, ensuring that excess renewable energy is captured, while carbon-intensive fuel use is reduced.
Vehicle-to-Grid (V2G), or bidirectional, charging is something that we are very excited about, and have begun rolling out in the UK, as we prepare to launch in other global markets. Built on the same Google Cloud architecture as the rest of Kaluza Flex, V2G not only enables smart charging, but allows electric vehicles to feed stored energy back into the grid.
Imagine an electric vehicle battery that can store 40 kWh of energy, but only uses 5 kWh a day. That leaves 35 kWh a day that can be charged to the battery during times of low demand, when energy is cheapest and greenest, then fed back into the network during peak periods, removing the need for more fossil fuels to be burned to meet demand.
Not only does V2G go further than conventional smart charging to make use of renewable energy when it’s abundant and support the balancing of the energy system, it also results in even lower energy prices for the customer, as they are effectively selling energy back to the grid. For example, as part of a large domestic V2G trial made in partnership with OVO and Nissan, Kaluza saved customers an average of £450 a year, with some customers saving up to £800/year by selling surplus energy back to the grid — transforming their homes into mini power stations.
With Kaluza Flex, we have a platform that offers benefits for all parties, from the grid operators, through to energy retailers and vehicle manufacturers, and all the way to the customer. Now, our aim is to bring this exciting offering to as many markets as possible, a goal which is made easier thanks to the scalable infrastructure and time-saving solutions of Google Cloud.
As more people make the switch to electric vehicles, our goal is to ensure that smart charging becomes standard practice, as we help to deliver on the potential of electric vehicles to contribute to a greener, decarbonized future.
Read More for the details.
The Data Cloud & AI Summit is Google Cloud’s global event showcasing the latest innovations and how customers are transforming their businesses with a unified, open, and intelligent data platform. At our third annual event, we shared the latest product launches across generative AI and Data Cloud, learnings from customers and partners, and best practices to support your data-driven transformation. In case you missed it, here are three highlights to help you level up your data and AI know-how.
We announced new product innovations that can help optimize price-performance, help you take advantage of open ecosystems, securely set data standards, and bring the magic of AI and ML to existing data, while embracing a vibrant partner ecosystem.
Generative AI innovations: A range of foundation models were made available to developers and data scientists through Generative AI support on Vertex AI — developers benefit from easy API access and data scientists have a full suite of tuning options for customizing foundation models. Gen App Builder is a brand-new offering that brings together foundation models with the power of search and conversational AI to enable enterprises to develop new generative AI apps.
BigQuery Editions provide more choice and flexibility for you to select the right feature set for various workload requirements. Mix and match among Standard, Enterprise, and Enterprise Plus editions to achieve the preferred price-performance by workload. We also introduced innovations for autoscaling and a new compressed storage billing model.
AlloyDB Omni is a downloadable edition of AlloyDB designed to run on-premises, at the edge, across clouds, or even on developer laptops. It offers the AlloyDB benefits you’ve come to love, including high performance, PostgreSQL compatibility, and Google Cloud support, all at a fraction of the cost of legacy databases.
Looker Modeler allows you to define metrics about your business using Looker’s innovative semantic modeling layer. Looker Modeler is the single source of truth for your metrics, which you can share with the BI tools of your choice, such as Power BI, Tableau, and ThoughtSpot, or Google solutions like Connected Sheets and Looker Studio, providing you with quality data to make informed decisions.
New partnerships: Over 900 software partners power their applications using Google’s Data Cloud. We announced partnerships to bring more choice and capabilities for customers to turn their data into insights from AI and ML, including new integrations between DataRobot and BigQuery, ThoughtSpot and multiple Google Cloud services — BigQuery, Looker, and Connected Sheets — and Google Cloud Ready for AlloyDB, a new program that recognizes partner solutions that have met stringent integration requirements with AlloyDB.
Watch the keynotes and dive into the breakout sessions in the AI and Data Essential tracks to learn more.
Customers are at the heart of everything we do at Google Cloud. Here are some stories you might have missed from the event. Dig in!
Booking.com, one of the largest online travel agencies, talks about how Google Cloud has been a true platform-as-a-service for their business. In this session, they highlight how BigQuery, Dataflow, and Cloud Spanner force-multiply each other when used together. BigQuery accelerated petabyte-scale queries from hours to seconds, Dataflow reduced development time and run time by 30x and Spanner reduced complexity with online schema evolution and federated queries.
Dun & Bradstreet is building a data cloud with Google Cloud: a centralized data lake and unified processing platform to consolidate all of its data, share that data with customers, improve performance, and reduce costs. Their session at the summit has all the details.
Orange France, a major Telco company in France, discusses how BigQuery and Vertex AI provide the foundation to increase revenue, maximize savings and improve customer experiences.
Richemont, a Switzerland-based luxury goods holding company, accelerates insights with SAP and Google Cloud Cortex Framework. In this session, Richemont talks about their innovation with advanced analytics powered by Google’s Data Cloud and how they’ve accelerated their time-to-value for their business.
ShareChat, an Indian social media platform with 340 million users, leverages Spanner and Bigtable to build differentiated features rather than worrying about managing underlying databases. Using an autoscaler with Bigtable and Spanner allowed them to reduce the cost of running these systems by 70%. Some of their data science clusters running on Bigtable scale from 30 nodes to 175 nodes and then back to 30 nodes during a single day. Learn more about their story in this session.
Tabnine joins CI&T, an end-to-end digital transformation partner, to discuss generative AI and why leveraging it for your developers is the ideal place to start.
Looking to hone your understanding of everything data and AI? Consider these resources.
Product and solution demos – Check out these demos for inspiration and insights into how Google Cloud’s products and solutions can solve your most pressing data and AI challenges. And in case you didn’t see this end-to-end data cloud & AI demo in action, you’re in for a treat, with methods and solutions developers can use today to unlock the power of data and AI with Google Cloud.
Learning and certifications – Find your learning path. Grow your cloud skills. Continue your cloud journey with the insights, data, and solutions across everything that’s cutting edge in cloud.
Hands-on Labs – Try things out, without really breaking anything. Get started with Hands-on Labs for products such as BigQuery, Spanner, AlloyDB, Looker, LookML, and more.
At Google Cloud, we believe that data and AI have the power to transform your business. Thank you to our customers and partners who are on this journey with us. To learn more about what you’ve read, watch the sessions on-demand and make sure to join our Data Cloud Live events series happening in a city near you. Get started at cloud.google.com/data-cloud to learn how tens of thousands of customers build their data clouds using Google Cloud.
Read More for the details.
At Google Cloud, we are committed to supporting our customers who want to use applications hosted outside of Google Cloud. In 2022, we were named the Overall Leader in the 2022 KuppingerCole Zero Trust Network Access Leadership Compass, in part because we introduced the BeyondCorp Enterprise app connector. The app connector can help customers provide Zero Trust access to applications in multi-cloud environments.
To help make it easier for administrators to connect and configure applications hosted outside Google Cloud, our enhanced BeyondCorp Enterprise app onboarding experience includes a new step-by-step workflow to onboard web applications and auto-provision load balancers and backend services. Here’s a look at the “Connect New Application” interface in action.
You can also set up the app connector on your own in minutes using the Google Cloud console or through our APIs.
A U.S.-based contractor and Google Cloud customer has told us they’re deploying the BeyondCorp Enterprise app connector to help simplify and extend their connectivity to a set of web applications hosted in a private data center. Since the app connector works by establishing reverse connectivity and is simpler to configure and operate than options like a VPN tunnel, this customer was able to reduce their implementation time from months to days.
Using app connector, customers are also able to extend Zero Trust access controls to web apps hosted on third-party clouds. End-users can remotely access these apps without a VPN from anywhere in the world.
Many organizations are attempting to establish multi-cloud networks, but connecting from one cloud to another using a VPN (Virtual Private Network) is difficult and can take months to set up. Additionally, opening specific ports for data ingress, a common practice for any data center or cloud, introduces security risks.
The BeyondCorp Enterprise app connector helps remove the inherent complexities of configuring and maintaining a VPN. With the app connector in place, customers simply need to onboard their applications and the connectivity infrastructure is completely managed by Google. Once connectivity is established to Google Cloud, customers have access to a fast, reliable, and low-latency global network that stitches users and applications together with high levels of performance and availability.
If you’d like to learn more, visit our documentation for app connector and our BeyondCorp Enterprise webpage.
Read More for the details.
As our cloud customers scale their environments, they need to manage cloud resources and policies. Our biggest customers have millions of assets in their Google Cloud environments. Securing growing environments requires tools to help discover, monitor, and secure cloud assets. To help, Security Command Center (SCC), our security and risk management solution, now includes new asset query functionality designed to make it easier for IT and security teams to identify assets in large, complex environments.
Security Command Center users can now perform SQL-like queries to get detailed information on where assets are located and how they are configured. This includes enumerating assets based on resource type, resource relationship, operating system configuration, and organizational policy metadata. Asset query runs on top of our near real-time metadata store of more than 275 Google Cloud asset types across compute, network, storage, and more.
To make asset query easy, we made it a fully-managed capability, so there is minimal setup. SCC users can jump right into writing simple queries. This eliminates the need to export asset data, configure a data warehouse such as BigQuery, or employ expensive third-party tools that require manual query operations.
Next, we made it simple for users who may not be comfortable authoring queries by including a library of pre-built queries to help answer common environmental or postural questions, such as:
Which storage buckets are publicly accessible?
Which user-managed service account keys are old, but still in use?
How many assets of a particular type are deployed in my project?
We also made it easy to see the relationships between assets in the environment. For example, with a single query users can discover which services make up a defined App Engine application, or they can quickly determine if a specific GKE cluster has a particular node.
In addition to an accurate inventory of their current cloud assets, IT and security teams need the ability to review the history of their cloud environment, including what changed and when changes were made. With asset query, SCC users can quickly view their inventory status at any point during the prior 35 days, and see what changes occurred during a specified time range of up to seven days, such as:
How many VM instances in the us-east region did my organization have at 2:00 PM yesterday?
What configuration changes occurred to my VMs in the us-west region in the past five days?
Query results are easily shared with internal stakeholders by exporting results via a simple CSV file, or by exporting to BigQuery.
To learn more about the asset query capabilities now available in Security Command Center Premium, please visit: https://cloud.google.com/asset-inventory/docs/query-assets. To get started with SCC, contact a Google Cloud sales representative.
Read More for the details.
Consumers today have more options than ever, which means businesses need to be dedicated to bringing the best-possible device performance to end users. At leading mobile device manufacturer OPPO, we’re constantly exploring ways to make better use of the latest technologies, including cloud and AI. One example is our AndesBrain strategy, which aims to make end devices smarter by integrating cloud tools with mobile hardware in the development process of AI models on mobile devices.
OPPO adopted this strategy because we believe in the potential of AI capabilities on mobile devices. On one hand, running AI models on end devices can better protect user privacy by keeping user data on mobile hardware, instead of sending it to the cloud. On the other hand, the computing capabilities of mobile chips are rapidly increasing to support more complex AI models. By linking cloud platforms with mobile chips for AI model training, we can leverage cloud computing resources to develop high-performance machine learning models that adapt to different mobile hardware.
In 2022, OPPO started implementing the AI engineering strategy on StarFire, our self-developed machine learning platform that merges the cloud with end devices and servers, forming one of the six capabilities of AndesBrain. Through StarFire, we’re able to take advantage of various advanced cloud technologies to meet our development needs. To facilitate the AI model development process and enhance AI capabilities on mobile devices, we’ve collaborated with Google Cloud and Qualcomm Technologies to embed Google Cloud Vertex AI Neural Architecture Search (Vertex AI NAS) on a smartphone for the first time. Let’s explore what we learned.
One major bottleneck of developing AI models on mobile devices is the limited computing capabilities of mobile chips compared to computer chips. Before using Vertex AI NAS, OPPO’s engineers mainly used two methods to develop AI models that can be supported by mobile devices. One is simplifying the neural networks trained on cloud platforms through network pruning or model compressing to make them suitable for mobile chips. The other is adopting lighter neural network architectures built on technologies like depthwise separable convolutions.
These two methods come with three challenges:
Long development time: To see if an AI model can smoothly run on a mobile device, we need to repeatedly run tests and manually adjust the model according to the hardware characteristics. As each mobile device has different computing capabilities and memory, the customization of AI models requires significant labor costs and leads to long development time.
Lower accuracy: Due to their limited computing capabilities, mobile devices only support lighter AI models. However, after AI models trained on cloud platforms are pruned or compressed, the accuracy rate of the models decreases. We might be able to develop an AI model with a 95% accuracy rate in a cloud environment, but it won’t be able to run on end devices.
Performance compromises: For each AI model on mobile devices, we need to reach a balance among accuracy, latency, and power consumption. High accuracy, low latency, and low power consumption can’t all be achieved at the same time. As a result, performance compromises are inevitable.
The neural architecture search technology was first developed by the Google Brain team in 2017 to create AI trained to optimize the performance of neural networks according to developers’ needs. By automatically discovering and designing the best architecture for a neural network for a specific task, the neural architecture search technology enables developers to more easily achieve better AI model performance.
Vertex AI NAS is currently the only fully-managed neural architecture search service available on a public cloud platform. As OPPO’s machine learning platform StarFire is cloud-based, we can easily connect Vertex AI NAS with our platform to develop AI models. On top of that, we chose to adopt Vertex AI NAS for on-device AI model development because of the following three advantages:
Automated neural network design: As mentioned, developing AI models on mobile devices can be labor intensive and time consuming. Because the neural network design is automated through Vertex AI NAS, we can greatly reduce development time and easily adapt an AI model to different mobile chips.
Custom reward parameters: Vertex AI NAS supports custom reward parameters, which is rare among NAS tools. This means that we can freely add the search constraints that we need our AI models to be optimized for. By leveraging this feature, we added power as a search constraint and successfully lowered the energy consumption of our AI model on mobile devices by 27%.
No need to compress AI models for mobile devices: Based on the real-time rewards sent back from the connected mobile chips, Vertex AI NAS can directly design a neural network architecture suitable for mobile devices. The end result can be run on end devices without being further processed, which saves time and effort for AI model adaptation.
Lowering power consumption is key to providing excellent user experience for AI models on mobile devices, particularly the computing intensive models related to multimedia and image processing. If an AI model consumes too much power, mobile devices can overheat and quickly drain their battery life. That is why the primary aim of using Vertex AI NAS for OPPO is to boost energy efficiency of AI processing on mobile devices.
To achieve this goal, we first added power as a custom search constraint to Vertex AI NAS, which only supports latency and memory rewards by default. This way, Vertex AI NAS can search neural networks based on the rewards of power, latency, and memory, letting us reduce power consumption of our AI models while reaching our desired levels of latency and memory consumption.
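One common way to fold several hardware constraints into a single search objective is a soft-penalty reward in the style of hardware-aware NAS work such as MnasNet. The sketch below is illustrative only: the exact reward formulation OPPO configured in Vertex AI NAS is not described in this post, and every target value and exponent here is a made-up placeholder.

```python
def nas_reward(accuracy, latency_ms, power_mw, memory_mb,
               target_latency_ms=20.0, target_power_mw=300.0,
               target_memory_mb=64.0, w=-0.07):
    """Multi-objective reward sketch: start from accuracy and scale it
    by (measured / target) ** w for each hardware constraint. With the
    negative exponent w, an over-budget candidate (ratio > 1) has its
    reward shrunk, while an under-budget one gets a mild boost."""
    reward = accuracy
    for measured, target in ((latency_ms, target_latency_ms),
                             (power_mw, target_power_mw),
                             (memory_mb, target_memory_mb)):
        reward *= (measured / target) ** w
    return reward
```

Under this kind of reward, a slightly more accurate architecture that doubles latency and power still scores lower than an on-budget one, which is exactly the trade-off the search needs to learn.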
Then, we connected the StarFire platform with Vertex AI NAS through Cloud Storage. At the same time, StarFire is linked with a smartphone equipped with Qualcomm’s Snapdragon 8 Gen 2 chipset through the SDK provided by Qualcomm. Under this structure, Vertex AI NAS can constantly send the latest neural network architecture via Cloud Storage to StarFire, which then exports the model to the chipset for testing. The test results are sent back to Vertex AI NAS again through StarFire and Cloud Storage, allowing it to conduct the next round of architecture search based on the rewards.
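Stripped of the infrastructure (the Cloud Storage hand-offs, the Qualcomm SDK, and StarFire’s export pipeline), the feedback loop described above reduces to a propose-measure-reward cycle. The skeleton below uses stand-in callables for those pieces and is a sketch of the loop’s shape, not of any Vertex AI NAS API:

```python
def nas_search_loop(propose, measure_on_device, reward_fn, rounds):
    """Skeleton of the search loop: propose a candidate architecture,
    measure it on the target chipset, score it, and keep the best.
    propose / measure_on_device stand in for the NAS service and the
    on-device test harness."""
    best_score, best_arch = float("-inf"), None
    for _ in range(rounds):
        arch = propose()                   # next candidate architecture
        metrics = measure_on_device(arch)  # e.g. accuracy, latency, power
        score = reward_fn(**metrics)       # rewards steer the next round
        if score > best_score:
            best_score, best_arch = score, arch
    return best_score, best_arch
```

In the real system, the reward signal is what flows back through StarFire and Cloud Storage, so each round of proposals is conditioned on how earlier candidates actually behaved on the Snapdragon hardware.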
This process was repeated until we achieved our target. In the end, we realized a 27% reduction in power consumption and a 40% reduction in computing latency for our AI model, while maintaining the same accuracy as before the optimization.
The first successful AI model optimization through Vertex AI NAS is truly exciting for us. We plan to deploy this energy efficient AI model on our future smartphones, and implement the same model training process supported by Vertex AI NAS in the algorithm development of our other AI products. Besides power, we also hope to add other reward parameters, such as bandwidth and operator friendliness, as search constraints to Vertex AI NAS for more comprehensive model optimization.
Vertex AI NAS has significantly facilitated the optimization of our AI capabilities on smartphones, and we believe that there is still great potential to explore. We will continue collaborating with Google Cloud to expand our use of Vertex AI NAS. For the developers who are interested in adopting Vertex AI NAS, we advise targeting the most relevant hardware reward parameters before launching the development process, and becoming familiar with the ways to build search spaces if custom search constraints are needed.
Special thanks to Yuwei Liu, Senior Hardware Engineer at OPPO, for contributing to this post.
Read More for the details.
In Google Cloud, IAM Policies provide administrators with fine-grained control over who can use resources within their Google Cloud organization. With Organization Restrictions, a new generally available Google Cloud security control, administrators can restrict users’ access to only resources and data in specifically authorized Google Cloud organizations. It does this by restricting Google Cloud organization access to traffic originating from corporate-managed devices.
Even for well-defended and managed Cloud organizations, there are multiple ways an attacker might seek to exfiltrate data. For example, a threat actor could create a rogue organization and grant your company’s employees access to it. The threat actor is banking on human error, hoping one of your company employees mistakenly uploads sensitive information to this rogue organization instead of the company’s actual organization in Google Cloud. Similarly, a malicious insider could deploy their own rogue Google Cloud organization, grant their corporate identity access to this rogue organization via IAM policy, and exfiltrate corporate data to this destination.
Organization Restrictions mitigates the risk of these data exfiltration events by allowing security administrators to set guardrails on what resources their principals or users are allowed to interact with regardless of what access permissions they have been granted via IAM policies.
Organization Restrictions are implemented for corporate-managed devices which have been configured to route all of their traffic to Google Cloud through a corporate-managed egress proxy:
Security administrators configure the egress proxy to insert a newly-introduced HTTP header called X-Goog-Allowed-Resources for all Google Cloud-bound requests. The header value contains a list of authorized Google Cloud organizations that can be accessed by requests traversing the proxy. Once a request containing this header reaches Google Cloud, the Organization Restrictions service enforces that the request can only access resources that belong to the Google Cloud organizations specified.
For example, the sample header value below specifies that all requests containing this header can only access resources in the 11111111 and 22222222 Google Cloud organizations:
Once an administrator drafts this header value and encodes it in a web-safe base64 format, they configure their egress proxy to insert this header for all Google Cloud-bound requests. Subsequently, employee access requests for resources not parented by either of these organizations will be denied access. That’s it — you have now successfully added another layer of protection against unauthorized access to your resources and data.
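For illustration, the encoding step can be sketched in a few lines of Python. This sketch assumes the JSON shape shown in the Organization Restrictions documentation (a "resources" list plus an "options" field such as "strict"); verify the current schema and supported option values against the documentation before deploying, and note that the organization IDs below are the placeholder values from the example above.

```python
import base64
import json

def org_restrictions_header(org_ids, options="strict"):
    """Build the X-Goog-Allowed-Resources header value: a web-safe
    base64 encoding of the authorization JSON (schema assumed from
    the public docs; confirm before use)."""
    payload = {
        "resources": [f"organizations/{org_id}" for org_id in org_ids],
        "options": options,
    }
    return base64.urlsafe_b64encode(
        json.dumps(payload).encode("utf-8")).decode("ascii")

# Placeholder organization IDs from the example above.
value = org_restrictions_header(["11111111", "22222222"])
```

The resulting string is what the egress proxy would attach to every Google Cloud-bound request on behalf of managed devices.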
Organization Restrictions can be enabled in egress proxies provided by our security partners. Customers have the flexibility to choose their preferred vendor’s egress proxy as long as it satisfies these prerequisites. F5 Networks, Fortinet, and Palo Alto Networks are some of the partners that help us deliver Organization Restrictions in conjunction with their proxy products. Here’s what they had to say:
F5 Networks
“We are excited to collaborate with Google Cloud to further support our customers to strengthen their security and protect their Google Cloud environment. Our F5 BIG-IP SSL Orchestrator integration with Google Cloud Organization Restrictions enables our joint customers to restrict access to only authorized Google Cloud Organizations and helps prevent data exfiltration from insider attacks. This integration provides another tool in our customers’ arsenal to secure their Google Cloud environment.” — Kevin Stewart, Principal Product Manager, F5
Fortinet
“Companies face extreme pressure to deliver consistent, enterprise-grade security across their entire business – from on-premises data centers, to branches and cloud deployments. It is critical that we as an industry continue to deliver new solutions, features, and security controls, such as Google Cloud’s Organization Restrictions, to meet the evolving cybersecurity requirements of today’s businesses. We are proud to be a Google Cloud partner and look forward to the continued collaboration and innovation as we work together to help customers reduce their overall technology complexity and minimize their attack surface.” – Vincent Hwang, Senior Director of cloud security at Fortinet
Palo Alto Networks
“Our Next-Generation Firewalls help protect our customers from unauthorized or unintended data leaks. We are excited about Google Cloud’s Organization Restrictions capability that helps prevent data exfiltration for our cloud customers. Our firewalls can easily be configured to block traffic based on the new Google Cloud Org Restrictions header, giving our joint customers another layer of protection for their sensitive data.” — Mukesh Gupta, VP Product Management, Palo Alto Networks
Organization Restrictions is available at no additional cost for Google Cloud users. You can get started with Organization Restrictions by visiting our documentation page where you can learn more about egress proxy prerequisite requirements, additional configuration options, and Google Cloud services which support organization restrictions enforcement. Lastly, if you are looking to test this feature without the use of an egress proxy, we recommend that you visit the step-by-step testing guide.
Resources:
Google Cloud: Configuring Organization Restrictions
Google Cloud: Validated partner solutions for Organization Restrictions
F5 Networks: Enabling Organization Restrictions with BIG-IP SSL Orchestrator
Read More for the details.
Last year, BigQuery introduced Remote Functions, a feature that allows users to extend BigQuery SQL with their own custom code, written and hosted in Cloud Functions or Cloud Run. With Remote Functions, custom SQL functions can be written in languages like Node.js, Python, Go, Java, .NET, Ruby, or PHP, enabling a personalized BigQuery experience for each organization while leveraging BigQuery's standard management and permission models.
We’ve seen an amazing number of use cases enabled by Remote Functions. Inspired by our customers’ success, we decided to document the art of the possible in this blog post, providing a few examples, sample code, and video instructions to jumpstart your Remote Function development.
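All of the examples that follow share the same basic shape: BigQuery sends the Cloud Function a JSON body containing a `calls` array (one inner array of argument values per row) and expects back a JSON object with a `replies` array of equal length. A minimal sketch of that contract, with the real API work replaced by a stand-in transform:

```python
import json

def handle_remote_function(request_body, transform):
    """Apply `transform` to each row of a BigQuery remote function
    request and shape the result into the expected response.

    BigQuery sends {"calls": [[arg1, arg2, ...], ...]} and expects
    {"replies": [result1, result2, ...]} with one reply per call."""
    calls = json.loads(request_body)["calls"]
    replies = [transform(*args) for args in calls]
    return json.dumps({"replies": replies})

# Stand-in transform; a real handler would call a Google Cloud API
# (Translation, Natural Language, DLP, ...) for each row instead.
body = json.dumps({"calls": [["hello"], ["world"]]})
print(handle_remote_function(body, str.upper))  # {"replies": ["HELLO", "WORLD"]}
```

Keeping the per-row work in a separate function like `transform` makes the request/response plumbing reusable across all of the examples below.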
Imagine a multinational organization storing, for example, customers’ feedback in various languages inside a common BigQuery table. The Cloud Translation API could be used to translate all content into a common language, making it easier to act on the data.
For this specific example, we’ve created an end-to-end tutorial for extending BigQuery with the Cloud Translation API. You can get all the instructions at https://cloud.google.com/bigquery/docs/remote-functions-translation-tutorial.
Analyzing unstructured data can be a daunting task. The combination of a Remote Function and Cloud Vision API can help organizations derive insights from images and videos stored in Google Cloud via SQL, without leaving the BigQuery prompt.
Imagine if organizations could assign labels to images and quickly classify them into millions of predefined categories, detect objects, read printed and handwritten text, and build valuable metadata into the image catalog stored in BigQuery, with all of this processing done via BigQuery SQL. This is what this example is all about.
We’ve created an end-to-end, easy-to-follow tutorial for this use case as well. You can get all the instructions at https://cloud.google.com/bigquery/docs/remote-function-tutorial.
The Cloud Natural Language API lets you derive insights from unstructured text with machine learning. With Remote Functions, this text processing can be combined with BigQuery SQL.
This example focuses on the ability to deliver insights from unstructured text stored in BigQuery tables using Google machine learning and SQL. A simple use case could be an application gathering social media comments and storing them in BigQuery while performing sentiment analysis on each comment via SQL.
The sample code (main.py and requirements.txt) can be found in this repo.
Once the Python code is deployed as a Cloud Function, you can create the BigQuery Remote Function using the syntax below:
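As a sketch, the remote function wrapping a sentiment-analysis Cloud Function might be declared like this; the project, dataset, function, connection, and endpoint names are placeholders for your own environment:

```sql
CREATE OR REPLACE FUNCTION `my_project.remote_udf.analyze_sentiment`(comment STRING)
RETURNS STRING
REMOTE WITH CONNECTION `my_project.us.gcf_connection`
OPTIONS (
  endpoint = 'https://us-central1-my_project.cloudfunctions.net/analyze_sentiment'
);
```

The connection referenced in `REMOTE WITH CONNECTION` is a BigQuery Cloud resource connection whose service account must be allowed to invoke the Cloud Function.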
For more information on how to create a remote function, please see this documentation.
A screenshot of the working function can be seen below.
Protection of sensitive data, like personally identifiable information (PII), is critical to every business.
With Remote Functions, a SQL call can be made to integrate functionality provided by the Cloud Data Loss Prevention API, without the need to export data out of BigQuery. Since the remote function calls are done in-line with SQL, even DML statements can be performed on the fly, using the outcome of the function as an input value for the data manipulation.
This example focuses on the ability to perform deterministic encryption and decryption of data stored in BigQuery tables using Remote Functions along with DLP.
The sample code (main.py and requirements.txt) can be found here. Please notice:
References to <change-me> in main.py will need to be adjusted according to your GCP environment
The code is inspecting data for the following info_types: PHONE_NUMBER, EMAIL_ADDRESS and IP_ADDRESS. Feel free to adjust as needed
Cloud Key Management Service (KMS) and Data Loss Prevention APIs will need to be enabled on the GCP project
A DLP Keyring and Key will be required. For directions, click here
The key will need to be wrapped (instructions)
The DLP User role will need to be assigned to the service account executing the Cloud Function (by default, the Compute Engine service account)
Once the Python code is deployed as a Cloud Function, you can create BigQuery Remote Functions using the syntax below:
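As a sketch, the encrypt/decrypt pair might be declared as follows; the project, dataset, connection, and endpoint names are placeholders for your own environment:

```sql
CREATE OR REPLACE FUNCTION `my_project.remote_udf.dlp_encrypt`(value STRING)
RETURNS STRING
REMOTE WITH CONNECTION `my_project.us.gcf_connection`
OPTIONS (
  endpoint = 'https://us-central1-my_project.cloudfunctions.net/dlp_encrypt'
);

CREATE OR REPLACE FUNCTION `my_project.remote_udf.dlp_decrypt`(value STRING)
RETURNS STRING
REMOTE WITH CONNECTION `my_project.us.gcf_connection`
OPTIONS (
  endpoint = 'https://us-central1-my_project.cloudfunctions.net/dlp_decrypt'
);
```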
For more information on how to create a remote function, take a look at the documentation.
The deterministic encryption/decryption functions we are using on phone numbers are dlp_encrypt and dlp_decrypt. The picture below demonstrates a phone number being encrypted by the dlp_encrypt function:
Since deterministic encryption and decryption techniques are being used, the picture below demonstrates the phone number can be decrypted back to its original value by calling the dlp_decrypt function with the hashed value created by the dlp_encrypt function.
Below is an example of a BigQuery table creation, selecting data from an existing table while encrypting any phone number, email address, or IP address values found inside the call_details column:
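A hedged sketch of what such a statement could look like; the source table and its columns other than call_details are hypothetical:

```sql
CREATE TABLE `my_project.remote_udf.call_records_encrypted` AS
SELECT
  call_id,  -- hypothetical key column
  `my_project.remote_udf`.dlp_encrypt(call_details) AS call_details
FROM
  `my_project.remote_udf.call_records`;
```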
Check the full demo video here.
Extract, Load, Transform (ELT) is a data integration process for transferring raw data from a source server to a target server such as BigQuery and then preparing the information for downstream uses. With ELT, the raw data is loaded into the data warehouse or data lake and transformations occur on the stored data.
When working with BigQuery, it’s common to see transformations being done with SQL and called via stored procedures. In this scenario, the transformation logic is self-contained, running inside BigQuery. But what if you need to keep external systems like Google Data Catalog updated while running the SQL transformation jobs?
This is what this example is all about. It demonstrates the ability to update Data Catalog, in-line with BigQuery stored Procedures using the catalog’s APIs and Remote Functions.
The sample code (main.py and requirements.txt) can be found here. Please notice:
References to <your-tag-template-id> and <your-project-id> in main.py will need to be adjusted according to your GCP environment
The Data Catalog Admin role (or similar) will need to be assigned to the service account executing the Cloud Function (by default, the Compute Engine service account), as tag template values will be updated
A tag template with the structure below exists
Once the Python code is deployed as a Cloud Function, you can create a BigQuery Remote Function using the syntax below:
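A sketch of the declaration, assuming five STRING parameters; the function and parameter names here are hypothetical, as are the project, dataset, connection, and endpoint names:

```sql
CREATE OR REPLACE FUNCTION `my_project.remote_udf.update_data_catalog`(
  project_id STRING,
  dataset_id STRING,
  table_id STRING,
  tag_field STRING,
  tag_value STRING)
RETURNS STRING
REMOTE WITH CONNECTION `my_project.us.gcf_connection`
OPTIONS (
  endpoint = 'https://us-central1-my_project.cloudfunctions.net/update_data_catalog'
);
```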
Now let’s see the remote function being used to update Data Catalog. The picture below demonstrates how to call the function, passing five parameters to it.
Below you can see how the tag template “BQ Remote Functions Demo Tag Template” gets updated after the function execution.
You can now use this function inside a BigQuery stored procedure performing a full ELT job. In the example below, remote_udf.test_tag table is being updated by the stored procedure and the number of updated rows + total number of rows in table remote_udf.test_tag are being stored in Data Catalog:
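A sketch of such a stored procedure, assuming a remote function (hypothetically named update_data_catalog) that takes the project, dataset, table, and the two row counts as its five parameters; the transformation step and column names are also hypothetical:

```sql
CREATE OR REPLACE PROCEDURE `my_project.remote_udf.elt_job`()
BEGIN
  DECLARE updated_rows INT64;
  DECLARE total_rows INT64;

  -- Hypothetical transformation step on the demo table.
  UPDATE `my_project.remote_udf.test_tag`
  SET processed = TRUE
  WHERE processed = FALSE;

  -- @@row_count holds the rows modified by the most recent DML statement.
  SET updated_rows = @@row_count;
  SET total_rows = (SELECT COUNT(*) FROM `my_project.remote_udf.test_tag`);

  -- Push the job metrics to Data Catalog in-line with the ELT job.
  SELECT `my_project.remote_udf`.update_data_catalog(
    'my_project', 'remote_udf', 'test_tag',
    CAST(updated_rows AS STRING), CAST(total_rows AS STRING));
END;
```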
Check the full demo video here.
Pub/Sub is used for streaming analytics and data integration pipelines to ingest and distribute data. It’s equally effective as a messaging-oriented middleware for service integration or as a queue to parallelize tasks.
What if you need to trigger an event by posting a message into a Pub/Sub topic via BigQuery SQL? Here is an example:
The sample code (main.py and requirements.txt) can be found here. Please notice:
References to <change-me> in main.py will need to be adjusted according to your GCP environment to reflect your project_id and topic_id
The service account executing the Cloud Function (by default the compute engine service account) will need to have permissions to post a message into a Pub/Sub topic
Once the Python code is deployed as a Cloud Function, you can create a BigQuery Remote Function using the syntax below:
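A sketch of the declaration, again with placeholder project, dataset, function, connection, and endpoint names:

```sql
CREATE OR REPLACE FUNCTION `my_project.remote_udf.publish_message`(message STRING)
RETURNS STRING
REMOTE WITH CONNECTION `my_project.us.gcf_connection`
OPTIONS (
  endpoint = 'https://us-central1-my_project.cloudfunctions.net/publish_message'
);
```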
A few screenshots of a remote function being used to post a message into a Pub/Sub topic can be found below:
Vertex AI brings together the Google Cloud services for building ML under one unified UI and API.
What if you need to call online predictions from Vertex AI models via BigQuery SQL? Here is an example:
The sample code (main.py and requirements.txt) can be found here. Please notice:
References to <change-me> in main.py will need to be adjusted according to your GCP environment to reflect your project_id, location, and model_endpoint
The service account executing the Cloud Function (by default, the Compute Engine service account) will need permissions to execute Vertex AI models; the "AI Platform Developer" role should be sufficient
Once the Python code is deployed as a Cloud Function, you can create a BigQuery Remote Function using the syntax below:
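A sketch of what the declaration might look like for the penguin-weight model; the input column names are hypothetical, loosely following the public penguins sample dataset, and the project, dataset, connection, and endpoint names are placeholders:

```sql
CREATE OR REPLACE FUNCTION `my_project.remote_udf.predict_penguin_weight`(
  species STRING,
  island STRING,
  sex STRING,
  culmen_length_mm FLOAT64,
  culmen_depth_mm FLOAT64,
  flipper_length_mm FLOAT64)
RETURNS FLOAT64
REMOTE WITH CONNECTION `my_project.us.gcf_connection`
OPTIONS (
  endpoint = 'https://us-central1-my_project.cloudfunctions.net/predict_penguin_weight'
);
```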
The example function above will predict penguin weights based on inputs such as species, island, sex, length and other parameters.
A few screenshots of a remote function being used can be found below:
Another common use case is BigQuery data enrichment using external APIs to obtain the latest stock price data, weather updates, or geocoding information. Whatever the external service, you can deploy the client code as a Cloud Function and integrate it with Remote Functions using the same methodology as in the examples covered above.
Here is a screenshot of a remote function example calling an external/public API to retrieve Brazil’s currency information:
A BigQuery remote function lets you incorporate GoogleSQL functionality with software outside of BigQuery by providing a direct integration with Cloud Functions and Cloud Run.
Hopefully this blog sparked some ideas on how to leverage this powerful BigQuery feature to enrich your BigQuery data.
Read More for the details.
The first security software-as-a-service (SaaS) solution to be integrated into Azure Virtual WAN, allowing you to protect your workloads with a highly available next-generation firewall (NGFW).
Read More for the details.
Starting today, you can use Common Access Card (CAC) and Personal Identity Verification (PIV) smart cards to authenticate users into Amazon WorkSpaces through your self-managed Active Directory (AD) and AWS Directory Service AD Connector in the AWS GovCloud (US-East) Region. Additionally, you can now use the AWS Management Console to configure smart card authentication with AWS Directory Service.
Read More for the details.
Starting today, Bring Your Own IP (BYOIP) is available in the Asia Pacific (Hyderabad) Region.
Read More for the details.
Today, AWS CloudFormation has expanded the availability of AWS CloudFormation Hooks to the Middle East (Dubai) and Asia Pacific (Jakarta) Regions. With this launch, customers can deploy Hooks in these newly supported AWS Regions to help keep resources secure and compliant.
Read More for the details.
Today, Amazon OpenSearch Service announces Multi-AZ with Standby, a new deployment option that enables 99.99% availability and consistent performance for business-critical workloads. With Multi-AZ with Standby, OpenSearch Service domains are resilient to potential infrastructure failures, such as a node or an Availability Zone (AZ) failure. Multi-AZ with Standby also ensures OpenSearch Service domains follow recommended best practices, simplifying configuration and management.
Read More for the details.
AWS Network Firewall now allows you to override the Suricata HOME_NET variable, making it easier to use AWS managed rule groups in firewalls deployed in a centralized deployment model. Managed rule groups are collections of predefined, ready-to-use rules that AWS writes and maintains for you. The Suricata HOME_NET variable of a managed rule group holds the Classless Inter-Domain Routing (CIDR) ranges inspected by AWS Network Firewall. Previously, you were unable to override the HOME_NET variable, as it used the CIDR ranges of the VPC where the firewall is deployed. If your firewall uses a central inspection VPC, AWS Network Firewall populates HOME_NET with the CIDR ranges of the inspection VPC instead of the application (spoke) VPCs you want to filter.
Read More for the details.
Amazon Rekognition content moderation is a deep learning-based feature that can detect inappropriate, unwanted, or offensive images and videos, making it easier to find and remove such content at scale. Starting today, Amazon Rekognition content moderation comes with an improved model for image and video moderation that significantly improves the detection of explicit, violent, and suggestive content. Customers can now detect explicit and violent content with higher accuracy to improve the end-user experience, protect their brand identity, and ensure that all content complies with their industry regulations and policies.
Read More for the details.