Azure – Generally Available: Azure Dedicated Host – Resize
Resize capability for Azure dedicated hosts is now generally available
Read More for the details.
To our customers:
At Google Cloud, we put your interests first. This means that when you choose to work with us, we become partners on a journey of shared innovation, shared support, and shared fate. We are committed to helping you evolve as technology advances, drawing on our depth of experience to ensure you can use the latest and best technology, while keeping you safe and protected. When it comes to the rapidly developing world of generative AI, this is imperative.
Earlier this year, we embedded Duet AI — an always-on AI collaborator — across our products, from Google Workspace to Google Cloud Platform, and made major advancements to Vertex AI that allow customers to safely, securely, and responsibly experiment and build with generative AI foundation models. We’ve been thrilled to see the innovative use cases you’ve developed from across many industries. We put a lot of thought into how we can instill trust and confidence into these AI offerings, and today we’re pleased to share how we’re addressing one key area of interest for our customers: intellectual property indemnity as it pertains to generative AI.
We’ll explore this complex topic in detail below, but to put it plainly for you, our customers: if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved. To do this we will employ a two-pronged, industry-first approach designed to give you more peace of mind when using our generative AI products. The first prong relates to Google’s use of training data, while the second specifically covers generated output of foundation models.
Taken together, these indemnities provide comprehensive coverage for our customers who may be justifiably concerned about the risks associated with this exciting new frontier of generative AI products. While these indemnities provide powerful protections, we are also committed to maintaining an ongoing dialogue with our customers about other specific use cases that may need coverage.
Let’s look at each of the indemnities in greater detail.
Indemnity 1: Training data
The indemnity for training data used by Google for generative AI models in all our services is actually not a new protection. We have always stood behind all of our services, including generative AI services, by offering a third-party intellectual property indemnity as standard for all customers. However, we’ve heard from many of you that while your company appreciates our general services indemnity, you would like explicit clarification regarding the training data behind the Google models that those services leverage. We are happy to deliver this reassurance.
Specifically, our training data indemnity covers any allegation that Google’s use of training data to create any of our generative models utilized by a generative AI service infringes a third party’s intellectual property right.
What does this training data indemnity mean for our customers?
We hope this gives you confidence that your company is protected against third parties claiming copyright infringement as a result of Google’s use of training data. Put simply, regardless of the training data underlying all our services, Google indemnifies you.
Indemnity 2: Generated output indemnity
The generated output indemnity provides you with a second layer of protection, as the generated output is created by our customers in response to prompts or other inputs that they provide to our services. With this second layer of protection, our indemnity obligations now also apply to allegations that generated output infringes a third party’s intellectual property rights.
The generated output indemnity will apply to Duet AI in Google Workspace and to a range of Google Cloud services. As a part of today’s announcement, products covered include:
Duet AI in Workspace, including generated text in Google Docs and Gmail and generated images in Google Slides and Google Meet
Duet AI in Google Cloud, including Duet AI for assisted application development
Vertex AI Search
Vertex AI Conversation
Vertex AI Text Embedding API / Multimodal Embeddings
Visual Captioning / Visual Q&A on Vertex AI
Codey APIs
An important note here: you, as a customer, also have a part to play. For example, this indemnity only applies if you did not intentionally create or use generated output to infringe the rights of others, and, similarly, if you use existing and emerging tools, for example to cite sources, to help use generated output responsibly.
What does this generated output indemnity mean for our customers?
The generated output indemnity means that you can use content generated with a range of our products knowing Google will indemnify you for third-party IP claims, including copyright — assuming your company is following responsible AI practices like the ones described above.
What do both indemnities together mean for our customers?
It means that you can expect Google Cloud to cover claims, like copyright infringement, made against your company, regardless of whether they stem from the generated output or from Google’s use of training data to create our generative AI models. By offering two-pronged generative AI indemnity protections, we are providing balanced, practical coverage for relevant types of potential claims. By offering these indemnities on our public service terms page (see below), customers automatically receive the benefit of these terms without needing to amend their existing agreements.
This is just the first step, and as we continue working together on our shared generative AI journey, we will continue to support you in making sure you can use our services safely, securely and confidently. With protections like these, we hope to give you the assurance you need to get the best out of generative AI for your business.
You can read the exact terms of service here for Google Cloud and here for Workspace.
Read More for the details.
Trends in the data space such as generative AI, distributed storage systems, unstructured data formats, MLOps, and the sheer size of datasets are making it necessary to expand beyond the SQL language to truly analyze and understand your data.
To provide users with more flexibility of coding languages, we announced BigQuery DataFrames at Next ‘23. Currently in preview, this new open source library gives customers the productivity of Python while allowing the BigQuery engine to handle the core processing. Offloading the Python processing to the cloud enables large scale data analysis and provides seamless production deployments along the data to AI journey.
BigQuery DataFrames is a unified Python API on top of BigQuery’s managed storage and BigLake tables. It lets developers discover, describe, and understand BigQuery data by providing a Python-compatible interface that can automatically scale to BigQuery-sized datasets. BigQuery DataFrames also makes it easy to move into a full production application by automatically creating SQL objects like BigQuery ML inference models and remote functions.
This is all done from the new BigQuery DataFrames package, which is unified with BigQuery’s user permission model, letting Python developers use their skills and knowledge directly inside BigQuery. A bigframes.DataFrame programming object can be handed off to the Vertex AI SDK, and the BigQuery DataFrames Python package is integrated with Google Cloud notebook environments such as BigQuery Studio and Colab Enterprise, as well as partner solutions like Hex and Deepnote. It can also be installed into any Python environment with a simple ‘pip install bigframes’ command.
Since the large-scale processing happens on the Google Cloud side, a small laptop is enough to get started. BigQuery DataFrames contains two APIs for working with BigQuery — bigframes.pandas and bigframes.ml. In this blog post, we will look at what can be done with these two APIs.
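To make that concrete, here is a minimal sketch of getting started with bigframes.pandas; the billing project is a placeholder, and the public penguins dataset is used only as a convenient stand-in for your own tables.

```python
# Install with: pip install bigframes
import bigframes.pandas as bpd

# Assumption: replace with the Google Cloud project that should be billed.
bpd.options.bigquery.project = "my-gcp-project"

# Read a BigQuery table into a DataFrame; the data stays in BigQuery.
df = bpd.read_gbq("bigquery-public-data.ml_datasets.penguins")

# Familiar pandas-style exploration, executed as BigQuery SQL under the hood.
print(df.shape)
print(df.head())
print(df["body_mass_g"].describe())
```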
Loosely based on the open source pandas API, the bigframes.pandas API is primarily designed for exploratory data analysis, advanced data manipulation, and data preparation.
The BigQuery DataFrames version of the pandas API provides programming abstractions such as DataFrames and Series that pandas users are familiar with. Additionally, it comes with some distinctions that make it easier to work with large datasets. The core capabilities of bigframes.pandas today are:
Unified data Input/Output (IO): One of the primary challenges data scientists face is the fragmentation of data across various sources. BigQuery DataFrames addresses this challenge head-on with robust IO methods. Whether the data is stored in local files, S3, GCS, or elsewhere, it can be seamlessly accessed and incorporated into BigQuery DataFrames. This interoperability not only facilitates ease of access but also effectively breaks down data silos, enabling cohesive analysis of disparate data sources within a unified platform.
Data manipulation: Traditional workflows often involve using SQL to preprocess large datasets to a manageable size for pandas, at times losing critical data nuances. BigQuery DataFrames fundamentally alters this dynamic. With access to over 200 pandas functions, data scientists can now engage in complex operations, like handling multi-level indexes and ordering, directly within BigQuery using Python.
Seamless transitions back to pandas: A developer can use bigframes.pandas for large-scale processing to narrow down to the set of data they want to work with, and then move back to traditional pandas for refined analysis of the processed dataset. BigQuery DataFrames allows for a smooth transition back to traditional pandas DataFrames (see the sketch below). Whether for advanced statistical methodologies, ML techniques, or data visualization, this interchangeability with pandas ensures that data scientists can operate within an environment they are familiar with.
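Here is a small sketch of that pattern, assuming the same public penguins table as above: the heavy lifting runs in BigQuery, and only the small aggregated result is pulled into local memory as a classic pandas object.

```python
import bigframes.pandas as bpd

df = bpd.read_gbq("bigquery-public-data.ml_datasets.penguins")

# Large-scale manipulation (filtering, grouping, aggregation) runs in BigQuery.
summary = df.dropna().groupby("species")["body_mass_g"].mean()

# Bring only the aggregated result back as a local pandas object.
local_result = summary.to_pandas()
print(local_result)
```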
Large-scale ML training: The ML API enhances BigQuery’s ML capabilities by introducing a Python-accessible version of BigQuery ML. It streamlines large-scale generative AI projects, offering an accessible interface reminiscent of scikit-learn. Notably, BigQuery DataFrames also integrates the latest foundation models from Vertex AI. To learn more, check out this blog on applying generative AI with BigQuery DataFrames.
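As an illustration, here is a hedged sketch of the scikit-learn-style bigframes.ml interface, again using the public penguins table as a stand-in dataset; the column choices and model are for demonstration only.

```python
import bigframes.pandas as bpd
from bigframes.ml.linear_model import LinearRegression

df = bpd.read_gbq("bigquery-public-data.ml_datasets.penguins").dropna()

# Features and label stay in BigQuery; training is pushed down to BigQuery ML.
X = df[["culmen_length_mm", "culmen_depth_mm", "flipper_length_mm"]]
y = df[["body_mass_g"]]

model = LinearRegression()
model.fit(X, y)

# Predictions are also computed in BigQuery and can be inspected locally.
print(model.predict(X).head().to_pandas())
```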
Scalable Python functions: You can also bring your ML algorithms, business logic, and libraries by deploying remote functions from BigQuery DataFrames. Creating user-developed Python functions at scale has often been a bottleneck in data science workflows. BigQuery DataFrames addresses this with a simple decorator, enabling data scientists to run scalar Python functions at BigQuery’s scale.
A full sample is provided here.
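Below is a hedged sketch of what such a remote function can look like; the BigQuery connection name is a placeholder, and the exact decorator arguments may vary by library version.

```python
import bigframes.pandas as bpd

df = bpd.read_gbq("bigquery-public-data.ml_datasets.penguins").dropna()

# Assumption: a BigQuery connection named "bigframes-rf-conn" already exists.
@bpd.remote_function([float], str, bigquery_connection="bigframes-rf-conn")
def size_bucket(body_mass_g: float) -> str:
    # Arbitrary scalar Python logic, deployed and invoked at BigQuery scale.
    return "large" if body_mass_g >= 4500 else "regular"

df["size_bucket"] = df["body_mass_g"].apply(size_bucket)
print(df[["body_mass_g", "size_bucket"]].head().to_pandas())
```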
Vertex AI integration: Additionally, BigQuery DataFrames can provide a handoff to Vertex AI SDK for advanced modeling. The latest version of the Vertex AI SDK can directly take a bigframes.DataFrame as input without the developer having to worry about how to move or distribute the data.
Hex’s polyglot support (SQL + Python) provides users with more ways to work with BigQuery data. Users can authenticate to their BigQuery instance and seamlessly transition between SQL and Python.
When connected to a Deepnote notebook, you can read, update or delete any data directly with BigQuery SQL queries. The query result can be saved as a dataframe and later analyzed or transformed in Python, or plotted with Deepnote’s visualization cells without writing any code. Learn more about Deepnote’s integration with BigQuery.
“Analyzing data and performing machine learning tasks has never been easier thanks to BigQuery’s new DataFrames. Deepnote customers are able to comfortably access the new Pandas-like API for running analytics with BigQuery DataFrames without having to worry about dataset size.” —Jakub Jurovych, CEO, Deepnote
Watch this breakout session from Google Cloud Next ‘23 to learn more and see a demo of BigQuery DataFrames. You can get started by using the BigQuery DataFrames quickstart and sample notebooks.
Read More for the details.
The General Data Protection Regulation (GDPR) sets out specific requirements for businesses and organisations who are established in Europe or who serve users in Europe. It regulates how businesses can collect, use, and store personal data.
At Google Cloud, we prioritise the security and privacy of your data, and we want you, as a reCAPTCHA Enterprise customer, to feel confident using our services in light of GDPR requirements. As a reCAPTCHA Enterprise customer, we support your GDPR compliance efforts by:
Committing in our contracts to process your customer personal data in reCAPTCHA Enterprise only as you instruct us, and to comply with our obligations under GDPR in relation to that processing;
Giving you the documentation and resources to assist you in your privacy assessment of our services; and
Continuing to evolve our capabilities as the regulatory landscape changes.
reCAPTCHA Enterprise has been compliant with GDPR since reCAPTCHA Enterprise became available in 2019. When you use reCAPTCHA Enterprise, you can have confidence in the following:
Any customer data put into our systems will only be processed in accordance with the customer’s instructions, as described in our GDPR-updated Google Cloud Data Processing Addendum and the reCAPTCHA Enterprise Service Specific Terms. Hardware and software information collected through reCAPTCHA Enterprise (such as device and application data) is only processed as necessary to provide, maintain, and improve the service, and for general security purposes. That information will not be used for any other purpose, and it is not used for personalised advertising by Google.
The Google Cloud Data Processing Addendum clearly articulates our privacy commitment to customers. We have evolved these terms over the years based on feedback from our customers and regulators. We specifically updated these terms to reflect GDPR requirements for agreements between companies that process personal data on their customers’ behalf, and to facilitate our customers’ compliance assessment and GDPR readiness when using Google Cloud services. Our customers can enter into these updated data processing terms via the opt-in process described for the Google Cloud Data Processing Addendum.
Learn more about Google Cloud and GDPR. If you have questions about getting started with reCAPTCHA Enterprise, please contact us.
Read More for the details.
Deutsche Bank has provided a wide range of financial services to corporations, governments, institutional investors, and individuals across the world since 1870. To better serve and protect our customers, part of our cloud transformation journey required software engineers to build a secure, scalable, and reliable certificate management system for cloud workloads.
Deutsche Bank partnered with Google Cloud Professional Services to efficiently and securely manage the encryption of data in transit for hundreds of the company’s applications. That was no small feat, since the company’s business-critical applications require tens of thousands of new certificates for network communication encryption each day.
Digital identities are of utmost importance in securing infrastructures and applications, and X.509 certificates are a common way to represent them. They have been widely used to help secure network communication, email, and general purpose encryption. Widespread adoption of service-oriented solution architectures has expanded the number of entities that can be secured through certificates. Each digital certificate has a finite lifespan and has to be renewed before expiry to function properly.
While the industry is moving to shorter certificate lifetimes to better manage security risks, doing so increases the frequency of the renewal process. Failure to renew a certificate, to revoke an unused or compromised certificate, or to distribute trusted certificates can have severe consequences including service outages and security breaches.
Google Cloud Certificate Authority Service (CAS) is a cornerstone for extending on-premises public key infrastructure (PKI) services to cloud workloads. With its ability to serve as a fully managed central authority for all enterprise workloads running in public clouds, CAS brings the central component of enterprise certificate management from on-prem to the cloud, and provides convenient tools and services for an automated, integrated solution built around it.
Having the ability to issue custom organization certificates in a scalable, secure, on-demand way is only the beginning of our certificate management story. With the dozens of managed services that Google Cloud offers, the use of CAS certificates needs to expand beyond current use-cases to protect communication between cloud and on-prem system components.
With that, we come to the need for a centralized, automated custom certificate lifecycle management solution that removes the burden of cloud service configuration from application developer teams, centralizes those tasks, and automates them so they can be managed by a finite-sized team.
Enterprises have various organizational and regulatory requirements and business preferences with regard to certificates. As a company in a heavily security-focused and regulated industry, we have further goals to address. Here are some of the most important ones:
Operation
Automate as much as possible to reduce cost, and minimize availability and security risks.
Make certificate management transparent for applications, so that the application teams don’t need to worry about certificate expiration and renewal.
Security
Ensure the highest level of client trust for internet-facing applications. Protect the brand and the users by securing applications through certificates that have been issued by public certificate authorities (CAs) using the strictest identity verification steps.
Establish discrete trust boundaries for production and non-production environments. To avoid data breaches and establish stronger security in production environments, different trust anchors must be created to issue certificates.
Ensure application domain ownership validation for certificate issuance. Enterprises have thousands of applications and services running in their landscape. They need to control which subdomains are allowed to be used by the applications. That is, any certificate request from an application for an unauthorized subdomain must be rejected.
Governance
Ensure that only certificates issued by approved CAs are used. Enterprises have policies to regulate the cryptographic services used within the organization to help them manage trust and reduce risk.
Monitor certificates issued in the organization. Keeping an inventory of certificates, with all related information about their owners, location, and algorithms in use, helps with reporting and with identifying the impact of and response to current and future cryptography-related threats, for example those indicated by the post-quantum revolution.
Ensure that specific certificate types are in use where they are required by regulators. For example, the EU directive for electronic payment services (PSD2) enforces the use of QWAC and QSEAL certificates issued by the trust service providers defined in the European Union eIDAS Regulation.
Automation is crucial for effectively managing so many certificates.
Our certificate management solution uses CAS and several other Google Cloud services to address our challenges. The figure below depicts its high-level architecture.
This solution can help with several use cases:
Initial enrollment and certificate provisioning: Provision a new certificate with a valid identity (Subject, Subject Alternative Name) compliant with the organization’s security policy requirements through infrastructure as code (see the sketch after this list).
Certificate content update: Change the attributes (Subject Alternative Name) of an existing certificate.
Certificate renewal: Provision a new, re-validated certificate before the existing certificate expires and automatically replace the expiring certificate with the newly created one, without application team intervention.
Certificate revocation: Revoke an existing certificate to invalidate it and eventually stop data exchange over network connections protected with that certificate, for example as a result of certificate key compromise or loss.
Trust anchor management: Provide a golden, up-to-date source of trust anchors (approved root CA certificates) for services and applications that perform certificate validation. Distribute and store trusted CA certificates in client and server applications so that they can validate TLS certificates.
Certificate authority renewal: CA certificates expire after a certain period of time and need to be renewed. Following industry best practice, CA lifetimes are becoming shorter.
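For the initial enrollment use case, a minimal, illustrative sketch of issuing a certificate from a CA pool with the CAS Python client (google-cloud-private-ca) could look like the following; the project, location, pool, and CSR values are placeholders, and this is not Deutsche Bank’s actual implementation.

```python
import google.cloud.security.privateca_v1 as privateca_v1
from google.protobuf import duration_pb2


def issue_certificate_from_csr(project_id: str, location: str, ca_pool: str,
                               certificate_id: str, pem_csr: str) -> str:
    # Submit a CSR to a CAS CA pool and return the signed PEM certificate.
    client = privateca_v1.CertificateAuthorityServiceClient()

    certificate = privateca_v1.Certificate(
        pem_csr=pem_csr,
        # Short-lived certificate, in line with shrinking certificate lifetimes.
        lifetime=duration_pb2.Duration(seconds=30 * 24 * 3600),
    )

    request = privateca_v1.CreateCertificateRequest(
        parent=client.ca_pool_path(project_id, location, ca_pool),
        certificate_id=certificate_id,
        certificate=certificate,
    )
    response = client.create_certificate(request=request)

    # The signed certificate (and its chain) can then be delivered to the workload.
    return response.pem_certificate
```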
There are multiple benefits that organizations should start to see once the central certificate management solution is in place.
At Deutsche Bank, there’s a dedicated security team that runs and centrally manages all certificates in the organization. Given the size of Deutsche Bank, that wouldn’t have been possible without automation of routine processes and compliant PKI services. Improving the process by eliminating manual efforts and managing certificates automatically has reduced costs, and also decreased the likelihood of application outages. Since we deployed the solution to production, we haven’t encountered application incidents due to certificate expiration.
High availability of applications, especially internet-facing services, is of great importance for enterprises because of reputation, cost, and regulatory compliance. Using Google Cloud services to build certificate management automation has not only resulted in a modern, reliable, future-proof solution, but also natively met the high availability and scalability requirements without additional effort.
Our solution makes data-in-transit encryption straightforward for the application teams and supports Deutsche Bank’s overall defense-in-depth approach. It ensures secure key creation with high entropy and prevents the use of weak keys. Moreover, these highly sensitive cryptographic assets are encrypted (not exposed in plaintext) throughout the process, except to the owning application or resource. We can now also reliably reduce the lifetime of certificates.
Certificate monitoring, control, and responsibility are owned by a dedicated team that concentrates on, and excels in, this difficult discipline. Every internal certificate that has ever been issued and used in a Deutsche Bank cloud environment is recorded and tracked in a central, monitored location in CAS.
With the help of core Google Cloud services such as Cloud Key Management Service and Certificate Authority Service, we could scale the certificate management process to the required levels with minimal resources. We started adopting CAS for our cloud infrastructure while it was still in its beta stage, and succeeded in building a reliable certificate management platform in the Google Cloud and achieving our goals.
Read More for the details.
Do you want to migrate your VMware workloads to Google Cloud but aren’t sure how to back up and protect your VMs affordably and reliably? We recently announced GCVE Protected, a new Google Cloud offering that provides bundled pricing for both Google Cloud VMware Engine and Google Cloud’s Backup and DR Service. With GCVE Protected, you can protect all the virtual machines on a VMware Engine node with our first-party backup and DR software for only an incremental add-on cost per VMware Engine node, giving you centralized, fast, and cost-efficient backup and recovery capabilities for your VMware Engine VMs.
GCVE Protected offers:
Dedicated, managed VMware private-cloud-as-a-service that is available globally, and sold and supported directly through Google Cloud
Backup management: Centrally managed, fast, and cost-efficient incremental-forever backups
Disaster recovery: Recover VMs across regions and projects
Ransomware recovery: Instantly access different point-in-time backup copies in parallel, to quickly identify a desirable recovery candidate
Test data management: Reduce time-to-test new patches and features, and run business reporting and analytics against production data. Reduce management burden with self-service auto refresh of data
Why should you use GCVE Protected?
A Google Cloud solution – Built for Google Cloud, this backup and DR service integrates with the Google Cloud ecosystem, including simplified sign-on via SSO and seamless integration with Cloud IAM, Cloud Logging, Monitoring, Key Management, and more.
Centralized management – You can manage backups for various Google Cloud and hybrid workloads all from one place.
Low TCO – Efficient, incremental-forever backup leveraging changed-block tracking significantly reduces the time it takes you to back up, minimizes impact on production servers, and optimizes bandwidth and storage utilization to lower costs.
Achieve lower RTO – Instantly mount and access VMware Engine VMs from backups stored in Cloud Storage. No need to first move backup data to warm storage to access it.
Simplified pricing – GCVE Protected is offered at a fixed price per node, regardless of the amount of data you protect or the number of VMs, simplifying pricing and offering better cost predictability.
To get started, you just need to:
1. Sign up for a Google Cloud account.
2. Create a VMware Engine cluster.
3. Enable the Backup and DR service for your cluster.
4. Start backing up your VMs.
For more information, please visit Google Cloud VMware Engine – Protected.
Read More for the details.
Modern applications are often highly distributed, with microservices and APIs running across many environments, including multiple clouds. This approach offers many benefits, such as resilience, scalability, and faster team velocity. However, this approach can also introduce latency concerns. In this blog, we highlight how Google Cloud Consulting (GCC) and Snap collaborated to reduce latency by 96% for Snap’s “User Service” microservice, finally allowing them to perform data analysis of their multi-cloud estate with acceptable performance.
Snap has a multicloud architecture, with operational database workloads in both AWS and Google Cloud, leveraging services including AWS DynamoDB as well as Google Cloud Spanner, Firestore and Bigtable. In this particular case they had the operational database in AWS (DynamoDB) and an analytical database (BigQuery) in Google Cloud. However, latency between the two systems was unacceptably high.
To help, Snap decided to supplement its User Service, a microservice that provides an interface for retrieving user data, with a User Service instance hosted in Google Kubernetes Engine (GKE) and a KeyDB cache. KeyDB is a fast, open-source, in-memory alternative to Redis that is owned by Snap.
KeyDB, hosted in Google Cloud, caches frequently requested data to avoid repetitive cross-cloud calls and minimize latency. Before implementing KeyDB, the average P99 latency between Google Cloud us-central1 region and AWS us-east-1 region was between 49-133ms. With KeyDB in Google Cloud, every cache hit resulted in a tiny fraction of the original latency — between 1.56-2.11ms. The non-cached data is stored in the Dynamo database.
The proposed multi-cloud architecture
Below is the description of the data retrieval flow.
1. Client services deployed on Google Cloud in the us-central1 region call the User Service, also deployed in the us-central1 region. The specific Google Cloud User Service implementation shown on the diagram is a CachingSecondaryUserService.
2. The CachingSecondaryUserService attempts to retrieve data from the KeyDB cache. KeyDB is also deployed in the us-central1 Google Cloud region, on another GKE cluster. While the data for some users may exist in the cache, resulting in a cache hit (1b), data for other users may be missing, resulting in a cache miss. The missing data is retrieved from the User Service implementation deployed in the us-east-1 AWS region (see path 1a).
3. On retrieval, the CachingSecondaryUserService backfills the cache (2a). That way subsequent requests can “hit” the cache, reducing cross-cloud latency (a minimal code sketch of this cache-aside flow appears after the invalidation notes below).
The cache is invalidated using two methods:
1. The cache in KeyDB is invalidated by the KeyDB TTL settings every 24 hours (3b).
2. The SecondaryServiceCacheInvalidator is triggered when data needs to be updated, e.g., when data is written to the Dynamo database by other services (3a).
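The flow above is a classic cache-aside pattern. Below is a minimal, illustrative sketch of it in Python; KeyDB speaks the Redis protocol, so a standard Redis client is used, and the host, key naming, and fetch_user_from_aws() helper are hypothetical rather than Snap’s actual code.

```python
import json
import redis

CACHE_TTL_SECONDS = 24 * 3600  # matches the 24-hour TTL invalidation (3b)
cache = redis.Redis(host="keydb.internal", port=6379)


def fetch_user_from_aws(user_id: str) -> dict:
    # Placeholder for the cross-cloud call to the User Service in AWS us-east-1.
    raise NotImplementedError


def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)  # cache hit (1b) avoids the cross-cloud call
    if cached is not None:
        return json.loads(cached)

    user = fetch_user_from_aws(user_id)  # cache miss, retrieve from AWS (1a)
    cache.set(key, json.dumps(user), ex=CACHE_TTL_SECONDS)  # backfill (2a)
    return user


def invalidate_user(user_id: str) -> None:
    # Explicit invalidation when the record changes in DynamoDB (3a).
    cache.delete(f"user:{user_id}")
```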
Below is the screenshot from Snap’s Grafana monitoring tool that shows latency for the cache “hit” path for one of the data points.
To the left of the Grafana screenshot, you can see that the retrieval latency prior to introducing the KeyDB caching solution averages just below 50ms. At 19:30 the caching solution is implemented, with the first call filling the cache, and every subsequent call retrieving from the cache on a cache hit. Below is the table summarizing results:
“Partnering with Google to integrate KeyDB caching within Google Cloud has opened the doors for future products and teams to deploy to Google Cloud with drastically reduced latency concerns,” said Vinay Kola, Software Engineering Manager, Snap. “This solution has provided us the ability to maintain a cloud-agnostic presence, as well as introduce new applications to Google Cloud that were previously unable or bottlenecked due to latency or reliability concerns.”
By identifying Snap’s business challenges and implementing technical solutions that solve these challenges and support their multi-cloud architecture, we are enabling Snap to grow their data analytics presence on Google Cloud. Indeed, Snap plans to replicate a similar architecture for data access from other Google Cloud regions. For the broader audience, other in-memory databases, such as Memorystore or Redis, could be used as the caching storage.
If you’re looking for a way to improve the performance of your applications, contact Google Cloud Consulting today. We can help you design, build, and manage your cloud infrastructure, and we have a proven track record of helping businesses of all sizes achieve their goals with Google Cloud.
Special thanks to Paul Valencia, Technical Account Manager, for his contributions to this blog post.
Read More for the details.
Starting November 2023, Virtual Machine Scale Sets created with PowerShell or the Azure CLI will automatically default to Flexible orchestration mode.
Read More for the details.
Link Your Bank Account allows you to add a bank account immediately after becoming an AWS customer. New customers with a Germany billing address can now pay AWS invoices with a bank account that supports the Single Euro Payment Area (SEPA) standard. Prior AWS payments are no longer required to add a SEPA direct debit payment method.
Read More for the details.
Search Pipelines, a new feature in OpenSearch 2.9, make it easy to build query and result processing pipelines. This lets you build search query and result processing as a composition of modular processing steps without complicating your application software.
Read More for the details.
Customers can now utilize Azure Backup for AKS to recover their containerized applications and data during a regional disaster scenario.
Read More for the details.
Starting today, we are introducing a new Amazon CloudWatch metric called Attached EBS Status Check to monitor if one or more Amazon EBS volumes attached to your EC2 instances are reachable and able to complete I/O operations. With this new metric, you can now quickly detect and respond to any EBS impairments that may potentially be impacting the performance of your applications running on Amazon EC2 instances.
Read More for the details.
Customers can now create Amazon FSx for NetApp ONTAP file systems in the AWS Asia Pacific (Osaka) Region.
Read More for the details.
You can now use AWS Systems Manager Application Manager to perform operational activities with SAP HANA databases in addition to command line interfaces. AWS Systems Manager for SAP also now supports highly available SAP HANA deployments.
Read More for the details.
General availability enhancements and updates released for Azure SQL in early-October 2023.
Read More for the details.
Provision up to 10 read replicas in universal regions on Azure Database for MySQL – Flexible Server.
Read More for the details.
Use Azure Private Link for private connectivity with MySQL – Flexible Server.
Read More for the details.
Now you can easily leverage your data stored in Azure Cosmos DB for MongoDB vCore for Retrieval Augmented Generation (RAG) with Azure OpenAI models using the "Use your data" feature in Azure OpenAI Studio.
Read More for the details.
A Zero Trust approach to security can help you safeguard your users, devices, and apps as well as protect your data against unauthorized access or exfiltration.
As part of Google Cloud’s efforts to help organizations adopt Zero Trust, we designed our BeyondCorp Enterprise (BCE) solution to be an extensible platform enabling customers to use a variety of signals from Chrome, desktop operating systems, and mobile devices. BeyondCorp Enterprise, Workspace CAA, and Cloud Identity can now receive critical Android device security signals for both advanced managed devices and, for the first time, basic managed devices.
For example, a customer can now define a rule to block access on devices that have potentially harmful apps installed or have been tampered with (such as if it had been rooted). These signals will be made available in the Workspace Admin Console device management UI, and in the Cloud Identity Devices API, enabling admins to gain observability into the state of devices accessing private apps, SaaS apps, or Workspace data.
Context-Aware Access is a security feature that allows admins to deploy granular control policies to enforce user access based on IP address, device posture, time of day, etc. We support five device attributes (screen lock, OS version, encryption, company-owned, and verified boot) for Android devices in basic Access Level mode, with additional device attributes in advanced mode.
The information below highlights the new signals we have added based on customer demand:
Signal Details
Attribute: Verified Boot
Type: boolean
Description: Verified Boot strives to ensure all executed code comes from a trusted source (usually device OEMs), rather than from an attacker or corruption. It establishes a full chain of trust, starting from a hardware-protected root of trust to the bootloader, to the boot partition and other verified partitions including system, vendor, and optionally OEM partitions.
Attribute: Potentially harmful apps
Type: boolean
Description: Google Play Protect checks apps when installed. It also periodically scans devices. This will flag if the device has deployed any apps that are potentially harmful or if an existing app has now been categorized as potentially malicious.
Disallow devices with potentially harmful apps detected.
Attribute: Google Play Protect
Type: boolean
Description: Require devices to have Google Play Protect Verify Apps enabled. This flag ascertains if the Google Play Protect is enabled for the device. Google Play Protect automatically scans all of the apps on Android phones and works to prevent the installation of harmful apps.
Attribute: CTS Compliance Check
Type: boolean
Description: The SafetyNet Attestation API provides a cryptographically signed attestation, assessing the device’s integrity. This flag attests that the device is a certified, genuine device that passes CTS.
In addition to CAA rules, you can get visibility into device state, including these new signals, across your fleet. These additional states are available via APIs as well as in the Admin console, whose device detail page has been updated with the new signals.
Potentially harmful app details are provided in the Installed apps section of the device detail page in the Admin console.
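As a hedged sketch, device inventory (including these posture signals) can also be read programmatically from the Cloud Identity Devices API, for example with google-api-python-client; this assumes credentials authorized for the cloud-identity.devices.readonly scope, and the printed fields are illustrative.

```python
from googleapiclient.discovery import build

# Uses Application Default Credentials; an admin-authorized identity is assumed.
service = build("cloudidentity", "v1")

request = service.devices().list(customer="customers/my_customer", pageSize=50)
while request is not None:
    response = request.execute()
    for device in response.get("devices", []):
        # Field names are illustrative; inspect the API response for the
        # Android security signals exposed for your fleet.
        print(device.get("name"), device.get("deviceType"), device.get("osVersion"))
    request = service.devices().list_next(request, response)
```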
We are continuing to add signals from the Chrome browser, ChromeOS, and mobile devices, as well as from partners. Reach out to your Google representative if you would like to see additional partners or signals added.
If you’d like to learn more, visit the BeyondCorp Enterprise webpage. You can also follow the steps for setting up CAA rules with BeyondCorp Enterprise here. You can find additional information for all the Android signals here.
Read More for the details.
Can you actually put generative AI to use at work? Or is it just good for half-baked demos that we play with for the sake of amusement?
In banking, retail, and entertainment, just to name a few industries, AI can give you a different angle on your operational data or increase productivity by offloading certain types of work. A lot of routine work traditionally done by people can be simplified and streamlined by the new technology. And generative AI has become much easier to build into applications, now that models are readily available and can be used directly from the tools you already know, whether they’re developer tools or the database’s native interface. Let’s take a look at an example of using AlloyDB Omni, a PostgreSQL-compatible relational database, with the AlloyDB AI capabilities we recently announced in preview.
Let’s say you’re selling online and want to add extra info to data to give your customers a better experience, provide more details about your products, or add a quick summary of a product description. This isn’t hard for a small number of items, but it can be a lot of work for thousands of products or titles in an inventory. How can you improve the process and make it more efficient? Sounds like a great use case for generative AI.
What if I told you that you can call AI models from your database to achieve these goals? And that with AlloyDB, you can do this in your own data center or any cloud using a downloadable version – AlloyDB Omni? Yes, you can make these calls directly from the database using AlloyDB AI, a set of generative AI capabilities in AlloyDB. Let me explain how it works.
In our example, we have our AlloyDB Omni database as a backend for a rental and streaming service where one of the tables – called titles – represents a list of available shows. The table has columns called title and description. The descriptions are rather short and we’d like to expand the descriptions to give more details about each show. We are going to use one of our Vertex AI foundation models for generative AI – the “text-bison” Large Language Model (LLM) – to write the expanded descriptions, and we’ll call the model directly from the AlloyDB Omni database using our title and original description as a prompt.
We start from the deployment of AlloyDB Omni. The process of installing and setting it up is easy and thoroughly described in the documentation so we don’t need to repeat it here. But before running the final “database-server install” command we need to take some extra steps to enable the Vertex AI integration.
From a high-level point of view, the AlloyDB Omni instance has to be able to call the Vertex AI API, which requires authentication and authorization in Google Cloud. The steps are described in the AlloyDB Omni documentation.
1. You need a service account in the Google Cloud project where the Vertex AI API is enabled.
2. Then you grant the service account permissions to use Vertex AI.
3. You create a service account key in JSON format and store it on the AlloyDB Omni database server.
4. Then you can use the key location in your alloydb CLI “database-server install” command to enable Vertex AI integration for your instance.
Here’s a high level diagram of the architecture.
AlloyDB Omni ML integration.
Once we have integration with Vertex AI enabled we can use either custom or pre-built foundation Vertex AI models in our application.
Let’s go back to our example. To run the demo queries, I’ve loaded some movie and TV show titles into my sample AlloyDB Omni database. The “titles” table has a title column with the name of the show and a description column with a brief description of the movie or show. Here is what we have as the original description for “Pinocchio”.
For my website, I want more elaborate descriptions for each movie or show. To achieve this, I create an additional column and fill it with data generated by Vertex AI based on a prompt to the Google “text-bison” model using the title and description columns as I’ve described earlier.
The prompt itself is simple enough and would be along the lines of “Can you create a summary for the <column title value> based on the following description – <column description value>?”. The prompt is used as the second argument in the “ml_predict_row” function. The first argument is the endpoint for the Vertex AI model, i.e. the location of the model on Google Cloud. In our case it is “publishers/google/models/text-bison”.
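As a sketch, a query like the following could be issued from application code with psycopg2, using the argument order described above (model endpoint first, prompt second); the connection details are placeholders, and the exact shape of the value returned by ml_predict_row may differ from what is printed here.

```python
import psycopg2

# Placeholder connection details for the AlloyDB Omni instance.
conn = psycopg2.connect(host="alloydb-omni.internal", dbname="media",
                        user="app", password="change-me")

QUERY = """
SELECT title,
       ml_predict_row(
           'publishers/google/models/text-bison',
           'Can you create a summary for ' || title ||
           ' based on the following description - ' || description || '?'
       ) AS expanded_description
FROM titles
WHERE title = %s;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY, ("Pinocchio",))
    for title, expanded in cur.fetchall():
        print(title, expanded)
```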
And here is the result returned by the function for “Pinocchio”.
We have more information and it might be enough, but as you probably know, the prompt determines everything in generative AI. What if we ask about something more elaborate? Here’s a slightly modified prompt where I’ve added the word “elaborate” and the result.
This output reveals almost the whole plot of “Pinocchio”, so we might need to be more careful. The model accepts a number of parameters, such as temperature, which controls how much randomness and creativity the model applies to its response, as well as other model-specific parameters. You can read more about the “text-bison” model and its parameters in the documentation.
This is a pretty simple example of how AlloyDB AI can help make some tasks much faster, but you can do much more to achieve your business requirements and implement your ideas. Vertex AI has a set of foundation models that can be used with different types of data and returned values. For example, it has models for vector search of similar values and models for working with images. You can read much more in the Vertex AI documentation.
As an exercise, you can bring your own data and tune one of the foundation models for your case using the tutorial embedded into the Vertex AI documentation. I recommend starting with the Tune Foundation Model tutorial.
Vertex AI foundation model tuning.
If you’re new to AlloyDB, you may be eligible for a free trial of the cloud-based version. Alternatively you can download the AlloyDB Omni free developer edition. To get started, go to this page and choose the free developer edition.
Read More for the details.