Azure – Generally available: Encrypt storage account with cross-tenant customer-managed keys
Azure Storage now supports customer-managed keys using a key vault on a different Azure Active Directory tenant.
Read More for the details.
AWS Launch Wizard now allows you to deploy SAP HANA databases using Amazon FSx for NetApp ONTAP volumes in single node, multi-node, and high availability (HA) deployment patterns.
Read More for the details.
Amazon Textract is a machine learning service that automatically extracts text, handwriting, and data from any document or image. Analyze ID is a specialized API within Textract that extracts data from identity documents, such as U.S. driver's licenses and U.S. passports. Today, we are pleased to announce updates to our Analyze ID extraction API.
Read More for the details.
Amazon Textract is a machine learning service that automatically extracts printed text, handwriting, and data from any document or image. AnalyzeExpense is a specialized API within Textract that understands the context of invoices and receipts and automatically extracts relevant data such as vendor name and invoice number. Today, we are pleased to announce major enhancements to AnalyzeExpense that include support for new fields and higher accuracy for existing fields.
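For a concrete sense of the API, here is a minimal sketch (Python with boto3) that runs AnalyzeExpense on a receipt stored in S3 and prints the summary fields it extracts; the bucket and file names are placeholders:

import boto3

# Minimal AnalyzeExpense sketch: extract summary fields such as vendor
# name and invoice number from a document in S3 (bucket/key are placeholders).
textract = boto3.client("textract")
response = textract.analyze_expense(
    Document={"S3Object": {"Bucket": "my-bucket", "Name": "invoice.pdf"}}
)
for doc in response["ExpenseDocuments"]:
    for field in doc["SummaryFields"]:
        label = field["Type"]["Text"]  # e.g. VENDOR_NAME, INVOICE_RECEIPT_ID
        value = field.get("ValueDetection", {}).get("Text")
        print(f"{label}: {value}")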
Read More for the details.
Secure your digital payment system in the cloud with Azure Payment HSM.
Read More for the details.
Amazon RDS Custom for SQL Server is a managed database service that allows administrative access to the operating system. Starting today, RDS Custom for SQL Server provides a simple way to scale your database disk storage as needed. With storage scaling, RDS Custom for SQL Server simplifies the burdensome process of storage configuration as your database size grows.
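As a rough sketch of what storage scaling could look like from code, here is a hedged boto3 example that grows an instance's allocated storage; the instance identifier and target size are placeholders, and supported values are described in the RDS Custom documentation:

import boto3

# Sketch: scale up disk storage on an RDS Custom for SQL Server instance.
# Identifier and size are placeholders; check the docs for supported ranges.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="my-rds-custom-sqlserver",
    AllocatedStorage=500,  # target size in GiB
    ApplyImmediately=True,
)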
Read More for the details.
AWS App Runner now supports private services which enables access to App Runner services from within an Amazon Virtual Private Cloud (VPC). App Runner makes it easier for developers to quickly deploy containerized web applications and APIs to the cloud, at scale, and without having to manage infrastructure. By default, App Runner services are accessible publicly over the internet. Now, with private services you can restrict network access to your internal websites, APIs, and applications to originate from within your VPC.
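Here is a hedged boto3 sketch of flipping an existing service to private ingress; the service ARN is a placeholder, and the field names follow the public API at launch:

import boto3

# Sketch: restrict an App Runner service so it is only reachable from
# within a VPC by disabling public ingress. The ARN is a placeholder.
apprunner = boto3.client("apprunner")
apprunner.update_service(
    ServiceArn="arn:aws:apprunner:us-east-1:123456789012:service/my-api/abc123",
    NetworkConfiguration={
        "IngressConfiguration": {"IsPubliclyAccessible": False}
    },
)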
Read More for the details.
Amazon Connect now provides an API to programmatically clear the notifications agents receive after they have missed or rejected a contact and make them eligible to be routed new contacts. Today, if an agent misses a contact, they won't be routed additional contacts until they acknowledge and clear the missed contact notification. Using this API, businesses can now build custom dashboards, so contact center managers can identify when an agent has missed a contact and make that agent available for additional contacts. This API can also be used to clear similar notifications, including when an agent encounters an error accepting a contact or is handling After Contact Work.
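The announcement doesn't spell out the API name here, so treat this as a hedged sketch assuming the DismissUserContact operation in boto3; all IDs are placeholders:

import boto3

# Sketch: clear a missed-contact notification so the agent becomes
# routable again. Assumes the DismissUserContact operation; IDs are placeholders.
connect = boto3.client("connect")
connect.dismiss_user_contact(
    InstanceId="your-connect-instance-id",
    UserId="agent-user-id",
    ContactId="missed-contact-id",
)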
Read More for the details.
Use Device Update for IoT Hub to publish, distribute, and manage over-the-air updates for everything from tiny sensors to gateway-level devices.
Read More for the details.
Beginning in November, Azure Databricks customers have an additional option for SQL compute, with Azure Databricks SQL Pro, which provides enhanced performance and integration features.
Read More for the details.
All newly created Azure Front Door, Azure Front Door (classic), and Azure CDN Standard from Microsoft (classic) resources will block any HTTP request that exhibits domain-fronting behavior.
Read More for the details.
Amazon Braket, the quantum computing service from AWS, adds support for Aquila, a new neutral atom-based quantum processor from QuEra Computing Inc. As a special-purpose device designed for solving optimization problems and studying quantum phenomena in nature, it enables researchers in industry and academia to explore a new, analog paradigm in quantum computing.
Read More for the details.
Eco-friendly routing is now available to developers in Preview. Improving fuel efficiency can help lower your fleet’s fuel/energy usage and CO2 emissions. Starting today, you’ll have the option to enable eco-friendly routing in your web and mobile experiences.
Eco-friendly routing is a feature of the new Routes API. When you enable eco-friendly routing, you can select the engine type along with other factors such as real-time traffic and road conditions to calculate the eco-friendly route. By default, the Routes API returns a default route, meaning a route computed without factoring in fuel or energy efficiency. Now, in addition to the default route in the response, you will also get an eco-friendly route showing the most fuel- or energy-efficient route based on your vehicle's engine type.
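To make that concrete, here is a hedged request sketch (Python with the requests library) asking the Routes API for a fuel-efficient reference route alongside the default route; the API key, addresses, and field mask are placeholders modeled on the public docs:

import requests

# Sketch: request a FUEL_EFFICIENT reference route in addition to the
# default route. API key and addresses are placeholders.
body = {
    "origin": {"address": "Amsterdam"},
    "destination": {"address": "Rotterdam"},
    "travelMode": "DRIVE",
    "routingPreference": "TRAFFIC_AWARE_OPTIMAL",
    "requestedReferenceRoutes": ["FUEL_EFFICIENT"],
    "routeModifiers": {"vehicleInfo": {"emissionType": "GASOLINE"}},
}
response = requests.post(
    "https://routes.googleapis.com/directions/v2:computeRoutes",
    json=body,
    headers={
        "X-Goog-Api-Key": "YOUR_API_KEY",
        "X-Goog-FieldMask": "routes.routeLabels,routes.duration,routes.distanceMeters",
    },
)
print(response.json())  # routeLabels distinguishes DEFAULT_ROUTE from FUEL_EFFICIENT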
How Google Maps Platform estimates fuel efficiency
The Routes API estimates fuel and energy efficiency using insights from the US Department of Energy’s National Renewable Energy Laboratory and data from the European Environment Agency. This calculation includes factors that affect your fuel and energy usage and CO2 emissions, such as:
Average fuel or energy consumption based on regionally representative vehicles per engine type (petrol or gas, diesel, hybrid, or electric)
Steepness of hills on your route
Stop-and-go traffic patterns
Types of roads (such as local roads or highways)
The Routes API returns the most fuel or energy-efficient route when one can be found with minimal impact on the arrival time. In cases where fuel or energy savings are too small or increase driving time significantly, the API shows relative fuel or energy savings between routes to help your drivers decide which route to take.
More efficient routes can mean increased driver efficiency, less travel time, and lower fuel consumption. For example, delivery or ridesharing companies can use eco-friendly routes to measure estimated fuel consumption and savings for a single trip, multiple trips, or even across their entire fleet to improve their business performance. Eco-friendly routing is available wherever it's available on Google Maps, and we're continuing to expand coverage. To get started with eco-friendly routing and the Routes API, check out the documentation.
For more information on Google Maps Platform, visit our website.
Read More for the details.
reCAPTCHA Enterprise is Google’s online fraud detection service that leverages more than a decade of experience defending the internet. reCAPTCHA Enterprise can be used to prevent fraud and attacks perpetrated by scripts, bot software, and humans. When installed inside a mobile app at the point of action, such as login, purchase, or account creation, reCAPTCHA Enterprise can block fake users and bots while allowing legitimate users to proceed.
To provide more complete coverage for native mobile iOS and Android applications, we’re announcing the general availability of the reCAPTCHA Enterprise Mobile SDK. Designed with digital-first and mobile-first organizations in mind, the new Mobile SDK fully integrates reCAPTCHA Enterprise’s frictionless experience on end-users’ mobile devices.
Unlike most web applications, iOS and Android apps run on physical devices that can provide a wealth of device telemetry to help identify fraud and bot activity. By combining both device and network signals, the new mobile SDK can better protect native mobile applications from bot attacks while unlocking the full potential of reCAPTCHA Enterprise. It provides:
Frictionless customer experience — no picking fire hydrants from a grid
Easy integration with your native mobile app, with support for popular frameworks like CocoaPods and Swift Package Manager
A regularly-updated device threat model to help stay ahead of attack evolution
Customers will be able to leverage the new mobile SDK to implement native iOS and Android protection against the OWASP Top 10 automated attacks common on the internet, which include fraudulent account creation, financial hijacking, and credential stuffing. This is particularly important for mobile workforces and end users who use a mobile app to access products and services. Since mobile traffic surpasses web traffic in many industries, it’s even more important to implement a comprehensive mobile app protection strategy to protect against the most prevalent attacks.
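On the server side, scoring a token produced by the mobile SDK looks the same as for web. Here is a minimal sketch with the Python client library; the project ID, site key, and token are placeholders:

from google.cloud import recaptchaenterprise_v1

# Sketch: create an assessment for a token generated by the mobile SDK
# at the point of action. All identifiers are placeholders.
client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()
event = recaptchaenterprise_v1.Event(
    token="TOKEN_FROM_MOBILE_SDK",
    site_key="MOBILE_SITE_KEY",
)
assessment = recaptchaenterprise_v1.Assessment(event=event)
response = client.create_assessment(
    parent="projects/my-project-id",
    assessment=assessment,
)
print(response.risk_analysis.score)  # closer to 1.0 means likely legitimate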
If you're interested in learning more about how to integrate the new Mobile SDK, check out the documentation for iOS and Android. Mobile and web integrations leverage the same easy-to-understand pricing for assessments, found here.
Read More for the details.
Developers love Firestore because of how fast they can build an application end to end. Over 4 million databases have been created in Firestore, and Firestore applications power more than 1 billion monthly active end users using Firebase Auth. We want to ensure developers can focus on productivity and enhanced developer experience, especially when their apps are experiencing hyper-growth. To achieve this, we’ve made updates to Firestore that are all aimed at developer experience, supporting growth and reducing costs.
We’re rolling out the COUNT() function, which gives you the ability to perform cost-efficient, scalable, count aggregations. This capability supports use cases like counting the number of friends a user has, or determining the number of documents in a collection.
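As a quick illustration, here is a hedged sketch using the Python client, assuming a client version with aggregation support; the collection and document names are illustrative:

from google.cloud import firestore

# Sketch: count a user's friends without reading every document.
# Names are illustrative; requires a client with COUNT() support.
db = firestore.Client()
friends = db.collection("users").document("alice").collection("friends")
results = friends.count(alias="friend_count").get()
for aggregation in results[0]:
    print(aggregation.alias, "=", aggregation.value)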
For more information, check out our Powering up Firestore to COUNT() cost-efficiently blog.
We’re rolling out Query Builder to enable users to visually construct queries directly in the console across Google Cloud and Firebase platforms. The results are also shown in a table format to enable deeper data exploration.
For more information, check out our Query Builder blog.
Firestore BaaS has always been able to scale to millions of concurrent users consuming data with real-time queries, but up until now, there has been a limit of 10,000 write operations per second per database. While this is plenty for most applications, we are happy to announce that we are now removing this limit and moving to a model where the system scales up automatically as your write traffic increases.
For applications using Firestore as a backend-as-a-service, we’ve removed the limits for write throughput and concurrent active connections. As your app takes off with more users, you can be confident that Firestore will scale smoothly.
For more information, check out our Building Scalable Real Time Applications with Firestore blog.
To help you efficiently manage storage costs, we’ve introduced time-to-live (TTL), which enables you to pre-specify when documents should expire, and rely on Firestore to automatically delete expired documents.
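In practice, this means writing a timestamp field that a TTL policy watches. A hedged sketch follows, assuming a TTL policy has already been enabled on the expireAt field for this collection; names are illustrative:

import datetime

from google.cloud import firestore

# Sketch: this document becomes eligible for automatic deletion one week
# from now, assuming a TTL policy on the "expireAt" field.
db = firestore.Client()
db.collection("sessions").add({
    "user": "alice",
    "expireAt": datetime.datetime.now(datetime.timezone.utc)
    + datetime.timedelta(days=7),
})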
For more information, check out our blog: Manage Storage Costs Using Time-to-Live in Firestore
In addition, the following features have been added to further improve performance and developer experience:
Tags have been added to enable developers to tag databases, along with other Google Cloud resources, to apply policies and observe billing by group.
Cross-service security rules allow secure sharing of Cloud Storage objects, by referencing Firestore data in Cloud Storage Security Rules.
Offline query (client-side) indexing Preview enables more performant client-side queries by indexing data stored in the web and mobile cache. Read the documentation for more information.
Get started with Firestore.
Read More for the details.
Google Cloud is excited to announce our participation in the Supercomputing 2022 (SC22) conference in Dallas, TX from November 13th – 18th, 2022. Supercomputing is the premier conference for High Performance Computing and is a great place to see colleagues, learn about the latest technologies, and meet with vendors, partners and HPC users. We’re looking forward to returning to Supercomputing fully for the first time since 2019 with a booth, talks, demos, labs, and much more.
We’re excited to invite you to meet Google’s architects and experts in booth #3213, near the exhibit floor entrances. If you’re interested in sitting down with our HPC team for a private meeting, please let us know at hpc-sales@google.com. Whether it’s your first time speaking with Google ever, or your first time seeing us at Supercomputing, we are looking forward to meeting with you. Bring your tough questions, and we’ll work together to solve them.
In the booth, we’ll have lab stations where you can get hands-on with Google Cloud labs covering topics ranging from HPC to Machine Learning and Quantum Computing. Come check out one of our demo stations to dive into the details of how Google Cloud and our partners can help handle your toughest workloads. We’ll also have a full schedule of talks from Google, Cloud HPC partners, and Google Cloud users hosted in our booth theater.
Be sure to visit our booth to review our full booth talk schedule. Here is a sneak peek at a few talks and speakers we have scheduled:
Using GKE as a Supercomputer – Louis Bailleul, Petroleum Geo-Services
Google Cloud HPC Toolkit – Carlos Boneti, Google Cloud
Michael Wilde, Parallel Works
Suresh Andani, Sr. Director, AMD
Quantum Computing at Google – Kevin Kissell, Google Cloud
Tensor Processing Units (TPUs) on Slurm – Nick Ihli, SchedMD
Women in HPC Panel – Cristin Merritt, Women in HPC; Annie Ma-Weaver, Google Cloud
DAOS on GCP – Margaret Lawson, Google Cloud; Dean Hildebrand, Google Cloud
There will also be talks, tutorials, and other events hosted by Google staff throughout the conference, including:
Tutorial: Parallel I/O in Practice, Co-hosted by Brent Welch
Exhibitor Forum Talk: HPC Best Practices on Google Cloud, Hosted by Ilias Katsardis
Storage events co-organized by Dean Hildebrand, including:
IO500 Birds of a Feather (List of top HPC storage systems)
DAOS Birds of a Feather (Emerging HPC Storage System)
DAOS on GCP talk in the Intel booth
Keynote by Arif Merchant at the Parallel Data Systems Workshop
Converged Computing: Bringing Together the HPC and Cloud Communities BoF, Bill Magro – Panelist
Ethics in HPC BoF, co-organized by Margaret Lawson
Cloud operating model: Challenges and opportunities, Annie Ma-Weaver – Panelist
Google Cloud is also excited to sponsor Women in HPC at SC22, and we look forward to seeing you at the Women in HPC Networking Reception, the WHPC Workshop, and Diversity Day.
If you’ll be attending Supercomputing, reach out to your Google account manager or the HPC team to let us know. We look forward to seeing you there.
Read More for the details.
Editor's note: In this guest blog, we have the pleasure of inviting Alok Pareek, Founder & EVP of Products at Striim, to share the latest experimental results from a performance study on real-time data integration from Oracle to Google Cloud BigQuery using Striim.
Relational databases like Oracle are designed to store data, but they aren’t well suited for supporting analytics at scale. Google Cloud BigQuery is a serverless, scalable cloud data warehouse that is ideal for analytics use cases. To ensure timely and accurate analytics, it is essential to be able to continuously move data streams to BigQuery with minimal latency.
The best way to stream data from databases to BigQuery is through log-based Change Data Capture (CDC). Log-based CDC works by directly reading the transaction logs to collect DML operations, such as inserts, updates, and deletes. Unlike other CDC methods, log-based CDC provides a non-intrusive approach to streaming database changes that puts minimal load on the database.
Striim — a unified real-time data integration and streaming platform — comes with out-of-the-box log-based CDC readers that can move data from various databases (including Oracle) to BigQuery in real-time. Striim enables teams to act on data quickly, producing new insights, supporting optimal customer experiences, and driving innovation.
In this blog post, we will outline experimental results cited in Striim’s recent white paper, Real-Time Data Integration from Oracle to Google BigQuery: A Performance Study.
We used the following components to build a data pipeline to move data from an Oracle database to BigQuery in real time:
Oracle CDC Adapters
A Striim adapter is a process that connects the Striim platform to a specific type of external application or file. Adapters enable various data sources to be connected to target systems with streaming data pipelines for real-time data integration.
Striim comes with two Oracle CDC adapters to help manage different workloads.
LogMiner-based Oracle CDC Reader uses Oracle LogMiner to ingest database changes on the server side and replicate them to the streaming platform. This adapter is ideal for low and medium workloads.
OJet adapter uses a high-performance log mining API to support high volumes of database changes on the source and replicate them in real time. This adapter is ideal for high-volume, high-throughput CDC workloads.
With two types of Oracle adapters to choose from, when is it advisable to use one over the other?
Our results show that if your DB workload profile is between 20 GB and 80 GB of CDC data per hour, the LogMiner-based Oracle CDC reader is a good choice. If you work with a higher volume of data, then the OJet adapter is better; currently, it's the fastest Oracle CDC reader available. Here's a table and chart showing the latency (read-lag) for both adapters:
BigQuery Writer
Striim’s BigQuery Writer is designed to save time and storage; it takes advantage of partitioned tables on the target BigQuery system and supports partition pruning in its merge queries.
Database Workload
For our experiment, we used a custom-built, high-scale database workload simulation. This workload, SwingerMultiOps, is based on Swingbench — a popular workload for Oracle databases. It's a multithreaded JDBC (Java Database Connectivity) application that generates concurrent DB sessions against the source database. We took the Order Entry (OE) schema of the Swingbench workload. In SwingerMultiOps, we continued to add more tables until we reached a total of 50 tables. Each of these tables comprised varying data types.
We built the data pipeline for our experiment following these steps:
1. Configure the source database and profile the workload
Striim’s Oracle adapters connect to Oracle server instances to mine for redo data. Therefore it’s important to have the source database instance tuned for optimum redo mining performance. Here’s what you need to keep in mind about the configuration:
Profile the DB workload to measure the load it generates on the source database
Set redo log sizes to a reasonably large value of 2G per log group
For the OJet adapter, set a large size for the DB streams_pool_size to mine redo as quickly as possible
For an extremely high CDC data rate of around 150 GB/hour, set streams_pool_size to 4G
2. Configure the Oracle adapter
For both adapters, default settings are enough to get started. The only configuration required is to set the DB endpoints to read data from the source database. Based on your need, you can use Striim to perform any of the following:
Handle large transactions
Read and write data to a downstream database
Mine from a specific SCN or timestamp
Regardless of which Oracle adapter you choose, only one adapter is needed to collect all data streams from the source database. This avoids the overhead of running both adapters at once.
3. Configure the BigQuery Writer
Use BigQuery Writer to configure how your data moves from source to target. For instance, you can set your writers to work with a specified dataset to move large amounts of data in parallel.
For performance improvement, you can use multiple BigQuery writers to integrate incoming data in parallel. Using a router ensures that events are distributed such that a single event isn’t sent to multiple writers.
Tuning the number of writers and their properties helps to ensure that data is moved from Oracle to BigQuery in real time. Since we’re dealing with large volumes of incoming streams, we configure 20 BigQuery Writers in our experiment. There are many other BigQuery Writer properties that can help you to move and control data. You can learn about them in detail here.
We used a Google BigQuery dataset to run our data integration infrastructure. We performed the following tasks to run our simulation and capture results for analysis:
Start the Striim app on the Striim server
Start monitoring our app components using the Tungsten Console by running a simple script
Start the Database Workload
Capture all DB events in the Striim app, and let the app commit all incoming data to the BigQuery target
Analyze the app performance
The Striim UI image below shows our app running on the Striim server. From this UI, we can monitor the app throughput and latency in real time.
At the end of the DB workload run, we looked at our captured performance data and analyzed the performance. Details are tabulated below for each of the source adapter types.
The charts below show how the CDC reader lag varies with the input rate as the workload progresses on the DB server.
Lag chart for Oracle Reader:
Lag chart for OJet Reader:
This experiment showed how to use Striim to move large amounts of data in real time from Oracle to BigQuery. Striim offers two high-performance Oracle CDC readers to support data streaming from Oracle databases. We demonstrated that Striim’s OJet Oracle reader is optimal for larger workloads, as measured by read-lag, end-to-end lag, and CPU and memory utilization. For smaller workloads, Striim’s LogMiner-based Oracle reader offers excellent performance. For more in-depth information, please refer to the white paper or contact Striim directly.
Read More for the details.
Editor's note: Kelsey Hightower is Google Cloud's Principal Developer Advocate, meeting customers, contributing to open source projects, and speaking at internal and external events on cutting-edge technologies in cloud computing. A deep thinker and a charismatic speaker, he's also unusually adept at championing a rarely noticed aspect of software engineering: it's really emotional stuff.
So how does one become Google Cloud’s Principal Developer Advocate?
A big part of the role is elevating people. I speak and give demos at conferences as well as contribute and participate in Open Source projects, which allows me to get to know a lot of different communities. I’m always trying to learn new things, which involves asking people if they can teach me something, or if we can learn together. I also try to spend a lot of time with customers, working on getting a strong sense of what it’s like to be in different positions in a team and working with our products to solve problems. It’s the best way I know to build trust and help people succeed.
Is this something you can learn, or does it take a certain type of person?
My career is built around learning to make people successful, starting with myself. I left college when I saw the courses were generically sending people up a ladder. I read a test prep book for CompTIA A+, a qualification that gives people a good overview of the IT world. I passed, and got a job and mentor at BellSouth. We’d troubleshoot, learn the fundamentals, and use our imaginations to solve problems. After that I opened an electronics store 30 miles south of Atlanta, making sure I stocked things people really needed, such as new modems and surge protectors anticipating the next lightning storm – I was always thinking about customers’ problems. Weekends I held free courses for people who’d bought technical books. When you teach something, you learn too. My customers and students didn’t have a lot of money, but wanted the best computing experience at the lowest cost possible.
I moved on from there, learning more about software and systems and doing a lot of work in open source Python, Configuration Management, and eventually Kubernetes. A lot of what I’m doing hasn’t changed, on a fundamental level. I’m helping people, elevating people, and learning.
What has doing this work taught you?
Creating good software is very emotional. No, really. I can feel it when I'm doing a live demo of a serverless system, and I point out that there are no Virtual Machines. The audience sighs because the big pain point is gone. I feel it in myself when I encounter a new open source project and can tell what it could mean for people – I try to bottle that, and bring that feeling to customer meetings, demos, or whiteboards. It's like I have a new sense of possibility, and I can feel people react to that. When I'm writing code, I feel the way someone does when they're cooking something good and can't wait for people to taste what they've made – "I can't wait for them to try this code, they are going to love this!"
A few years ago I started our Empathetic Engineering practice, which enables people at Google Cloud to get a better sense of what it’s like for customers to work with our technology. The program has had a lot of success, but I think one of the most important payoffs is that people are happier when they feel they are connecting on a deeper level with the customers.
Read More for the details.
With the unprecedented increase in remote collaboration over the last two years, development teams have had to find new ways to collaborate, driving increased demand for tools to address the productivity challenges of this new reality. This distributed way of working also introduces new security risks, such as data exfiltration — information leaving the company’s boundaries. For development teams, this means protecting the source code and data that serves as intellectual property for many companies.
At Google Cloud Next, we introduced the public Preview of Cloud Workstations, which provides fully managed and integrated development environments on Google Cloud. Cloud Workstations is a solution focused on accelerating developer onboarding and increasing the productivity of developers’ daily workflows in a secure manner, and you can start using it today simply by visiting the Google Cloud console and configuring your first workstation.
Cloud Workstations provides managed development environments with built-in security, developer flexibility, and support for many popular developer tools, addressing the needs of enterprise technology teams.
Developers can quickly access secure, fast, and customizable development environments anywhere, via a browser or from their local IDE. With Cloud Workstations, you can enforce consistent environment configurations, greatly reducing developer ramp-up time and addressing “works on my machine” problems.
Administrators can easily provision, scale, manage, and secure development environments for their developers, providing them access to services and resources that are private, self-hosted, on-prem, or even running in other clouds. Cloud Workstations makes it easy to scale development environments, and helps automate everyday tasks, enabling greater efficiency and security.
Cloud Workstations focuses on three core areas:
Fast developer onboarding via consistent environments
Customizable development environments
Security controls and policy support
Getting developers started on a new project can take days or weeks, with much of that time spent setting up the development environment. The traditional model of local setup may also lead to configuration drift over time, resulting in “works on my machine” issues that erode developer productivity and stifle collaboration.
To address this, Cloud Workstations provides a fully managed solution for creating and managing development environments. Administrators or team leads can set up one or more workstation configurations as their teams’ environment templates. Updating or patching the environments of hundreds or thousands of developers is as simple as updating their workstation configuration and letting Cloud Workstations handle the updates.
Developers can create their own workstations by simply selecting among the configurations to which they were granted access, making it easy to ensure consistency. When developers start writing code, they can be certain that they are using the right version of their tools.
Developers use a variety of tools and processes optimized to their needs. We designed Cloud Workstations to be flexible when it comes to tool choice, enabling developers to use the tools they’re the most productive with, while enjoying the benefits of remote development. Here are some of the capabilities that enable this flexibility:
Multi-IDE support: Developers use different IDEs for different tasks, and often customize them for maximum efficiency. Cloud Workstations supports multiple managed IDEs such as IntelliJ IDEA Ultimate, PyCharm Professional, GoLand, WebStorm, Rider, Code-OSS, and many more. We've also partnered with JetBrains so that you can bring your existing licenses to Cloud Workstations. These IDEs are provided via optimized browser-based or local-client interfaces, avoiding the challenges of general-purpose remote desktop tools, such as latency and limited customization.
Container-based customization: Beyond IDEs, development environments also comprise libraries, IDE extensions, code samples, and even test databases and servers. To help ensure your developers are getting the tools they need quickly, you can extend the Cloud Workstations container images with the tools of your choice.
Support for third-party DevOps tools: Every organization has its own tried and tested tools — Google Cloud services such as Cloud Build, but also third-party tools such as GitLab, TeamCity, or Jenkins. By running Cloud Workstations inside your Virtual Private Cloud (VPC), you can connect to tools self-hosted in Google Cloud, on-prem, or even in other clouds.
With Cloud Workstations, you can extend the same security policies and mechanisms you use for your production services in the cloud to your developer workstations. Here are some of the ways that Cloud Workstations helps to ensure the security of your development environments:
No source code or data is transferred or stored on local machines.
Each workstation runs on a single dedicated virtual machine, for increased isolation between development environments.
Identity and Access Management (IAM) policies are automatically applied, and follow the principle of least privilege, helping to limit workstation access to a single developer.
Workstations can be created directly inside your project and VPC, allowing you to help enforce policies like firewall rules or scheduled disk backups.
VPC Service Controls can be used to define a security perimeter around your workstations, constraining access to sensitive resources, and helping prevent data exfiltration.
Environments can be automatically updated after a session reaches a time limit, so that developers automatically get any updates in a timely manner.
Fully private ingress/egress is also supported, so that only users inside your private network can access your workstations.
“We have hundreds of developers all around the world that need to be able to be connected anytime, from any device. Cloud Workstations enabled us to replace our custom solution with a more secure, controlled and globally managed solution.” — Sebastien Morand, Head of Data Engineering, L’Oréal
“With traditional full VDI solutions, you have to take care of the operating system and other factors which are separate from the developer experience. We are looking for a solution that solves problems without introducing new ones.” — Christian Gorke, Head of Cyber Center of Excellence, Commerzbank
“We are incredibly excited to tightly partner with Google Cloud around their Cloud Workstations initiative, that will make remote development with JetBrains IDEs available to Google Cloud users worldwide. We look forward to working together on making developers more productive with remote development while improving security and saving computation resources.” — Max Shafirov, CEO, JetBrains
Try Cloud Workstations today by visiting your console, or learn more on our webpage, in our documentation or by watching this Cloud Next session. Cloud Workstations is a key part of our end-to-end Software Delivery Shield offering. To learn more about Software Delivery Shield, visit this webpage.
Read More for the details.
Is it already past October? Trust me, this 7-part blog-series journey with you all has been very eventful for me, and I had the most fun experimenting with various Google Cloud databases, serverless options, fun machine learning use cases, and more! It was an enthralling Google Cloud summer spread, just as I thought it would be. In this blog, you will read about an assortment of some of my favorite Google Cloud database and storage features and other related services that I didn't get to cover in the first 6 parts of my blog series "Databases on Google Cloud".
Prometheus is an open-source monitoring and alerting system built on a multidimensional data model, with time series data identified by metric name and key/value pairs. Google Cloud Managed Service for Prometheus is Google Cloud's fully managed multi-cloud solution for Prometheus metrics. Managed Service for Prometheus lets you globally monitor and alert on your workloads, using Prometheus, without having to manually manage and operate Prometheus at scale.
3 reasons why it’s one of my favorite services:
1. Managed Service for Prometheus is built on top of Monarch, the same globally scalable data store used for Google’s own monitoring, and it uses the same backend and APIs as Cloud Monitoring
2. All Cloud Monitoring metric data is queryable using PromQL, and all Managed Service for Prometheus data is queryable using Cloud Monitoring
3. You can use PromQL to query over 1,500 free metrics in Cloud Monitoring, even without sending data to Managed Service for Prometheus
Let’s see a quick overview of accessing Managed Service for Prometheus data in different ways:
1. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project
2. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project
3. Navigate to Managed Prometheus by searching for it in the Google Cloud Console; we'll view the storage metrics of my BigQuery usage
4. Click Metrics Explorer and click on “Select a Metric”
5. Navigate to BigQuery Dataset in the list that pops up
6. Select Storage from the list of Active Metric Categories list
7. Click Apply
Note: This is accurate as of this writing. An important update to the Query Language section of Cloud Monitoring is expected next month, and it will also be covered in this blog.
8. In the Configuration tab, you can configure some settings, including:
a. Group by fields
b. Aggregator
c. Alignment Period
d. Legend template, etc.
9. In the MQL tab, you can do the same thing but with a query
10. As you can see below, in this demo, I have taken Project Id and Data Set Id as the Group by fields and Sum as the aggregator:
The screenshot below shows the Metrics Explorer with the MQL tab enabled and the query above entered; click Run Query to see the results:
I love this part where you can click a button to land on a page that lets you query both Google Cloud Managed Service for Prometheus metrics and Cloud Monitoring metrics.
Let's try the same monitoring that we did with the Metrics Explorer option, this time with PromQL:
1. The tab where you see PromQL Query is where you enter the query for this demo
2. In this case, we are calculating the sum of tables grouped by dataset and project
PromQL:
sum by (project_id, dataset_id) (bigquery_googleapis_com:storage_table_count)
The table format result:
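If you'd rather run the same query programmatically than in the console, the Managed Service for Prometheus query endpoint accepts standard PromQL over HTTP. Here is a hedged Python sketch using application default credentials; the endpoint path follows the public docs:

import google.auth
import google.auth.transport.requests
import requests

# Sketch: send the PromQL query above to the Managed Service for
# Prometheus query endpoint using application default credentials.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/monitoring.read"]
)
credentials.refresh(google.auth.transport.requests.Request())

url = (
    f"https://monitoring.googleapis.com/v1/projects/{project_id}"
    "/location/global/prometheus/api/v1/query"
)
query = "sum by (project_id, dataset_id) (bigquery_googleapis_com:storage_table_count)"
response = requests.get(
    url,
    params={"query": query},
    headers={"Authorization": f"Bearer {credentials.token}"},
)
print(response.json())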
One favorite feature I want to call out as part of this section is the Groups feature of Cloud Monitoring, which lets you define and monitor groups of resources, such as VM instances, databases, and load balancers. You can organize resources into groups based on criteria that make sense for your applications.
Imagine all your resources, applications, databases, dependencies, and other services being monitored in a single-pane-of-glass view, without having to access multiple reports and maintain several stores of monitoring data. That is what the Groups feature of Cloud Monitoring lets you do.
a. On the monitoring console, click “Groups” on the left
b. Enter your Group Name
c. Select the criteria for your group
d. Click CREATE
In my demo, I have grouped all my services, applications and resources under my project in one GROUP.
e. Once it is created, you should see all your resources in one place
Learn more about Cloud Monitoring Groups, here.
With the new Permissions tab for Monitoring access, you are able to add, edit and delete principals to modify Monitoring access for this project. This way you are able to control Monitoring Admin, Editor and Viewer access for monitoring all your resources and services in one place.
1. Click ‘Permissions’ link on the left bottom section of the Cloud Monitoring console
2. On the right, you would be able to add and remove Permissions with Principals and Roles as shown in the screenshot below
Now that’s all I wanted to discuss about Prometheus, Cloud Monitoring and other related features. For more details on this topic, read the documentation.
We will now switch to the last part of this blog, on a totally different note, an overview of Serverless MongoDB Atlas on Google Cloud.
MongoDB Atlas is a fully managed, global cloud database from MongoDB that combines a flexible JSON-like data model, rich querying and indexing, and elastic scalability while automating time-consuming database admin tasks. MongoDB Atlas provides customers a fully managed service on Google's globally scalable and reliable infrastructure. Atlas allows you to manage your databases easily with just a few clicks in the UI or an API call, is easy to migrate to, and offers advanced features such as global clusters for low-latency read and write access anywhere in the world. We can easily deploy, manage, and grow MongoDB on Google Cloud, integrate with Cloud Key Management Service to manage sensitive data, migrate existing MongoDB applications to the cloud with a Google Cloud-native experience, and, most interesting of all, take advantage of Google's ML and AI capabilities with MongoDB applications.
You heard it right! As modern application developers, we’re juggling many priorities: performance, flexibility, usability, security, reliability, and maintainability. On top of that, we’re handling dependencies, configuration, and deployment of multiple components in multiple environments and sometimes multiple repositories as well. And then we have to keep things secure and simple. Ah, the nightmare!
This is the reason we love serverless computing. Serverless allows developers to focus on the thing they like to do the most—development—and leave the rest of the attributes, including infrastructure and maintenance, to the platform offerings. However, many serverless models overlook the fact that traditional databases are not managed. You need to manually provision infrastructure (vertical scaling) or add more servers (horizontal scaling) to scale the database. This introduces a bottleneck in your serverless architecture and can lead to performance issues.
MongoDB launched serverless instances, a new fully managed, serverless database deployment in Atlas, to solve this problem. With serverless instances you never have to think about infrastructure — simply deploy your database and it will scale up and down seamlessly based on demand — requiring no hands-on management. And the best part: you will only be charged for the operations you run.
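Connecting to a serverless instance from an application works the same as for any Atlas cluster: a standard connection string. A hedged sketch with PyMongo follows; the URI is a placeholder you'd copy from the Atlas UI:

from pymongo import MongoClient

# Sketch: connect to a serverless Atlas instance and write a document.
# The connection string is a placeholder; srv URIs require pymongo[srv].
client = MongoClient(
    "mongodb+srv://user:password@serverless0.example.mongodb.net/"
    "?retryWrites=true&w=majority"
)
db = client["appdb"]
db["events"].insert_one({"type": "signup", "user": "alice"})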
You can refer to the blog linked here to create, build and deploy a fully serverless MEAN (MongoDB, Express.js, Angular.js, Node.js) stack application with Google Cloud Run and MongoDB Atlas on Google Cloud.
That, my friends, is our celebratory conclusion of the 7-part blog series. Thank you for taking the journey with me. You can try your hand at the experiments included in this series, see the consolidated list of database blogs here, and reach out if you have specific topics of interest. Here are some links you can reference to learn more about Cloud Monitoring Insights, Query Insights, Prometheus and more:
https://www.youtube.com/watch?v=7BLV24noNGc
https://cloud.google.com/monitoring
https://www.youtube.com/watch?v=VL2Ql0cmo4g&autoplay=1
https://cloud.google.com/products/operations
https://cloud.google.com/managed-prometheus
Please refer to the links below for all the parts of this blog series:
Databases on Google Cloud: Part 1 – Data modeling basics
Databases on Google Cloud Part 2 – Options at a glance!
Databases on Google Cloud part 3 – Cloud Spanner! & CRUD it with Spring Boot on Cloud Run
Databases on Google Cloud Part 4: Query, Index, CRUD and Crush your Java app with Firestore APIs
Databases on Google Cloud Part 5: Lightweight Application Development with Cloud Functions (Java) and Cloud SQL (SQL Server) in 2 minutes
Databases on Google Cloud Part 6: BigQuery and No-code SQL-only Machine Learning
Read More for the details.