Azure – Azure SQL: General availability updates for late-March 2023
General availability enhancements and updates released for Azure SQL in late-March 2023
Read More for the details.
Azure HDInsight for Apache Kafka 3.2.0 is now available for public preview and ready for production workloads.
Read More for the details.
Log all connections to your cache.
Read More for the details.
In-place data sharing for Azure Storage with Microsoft Purview, now in public preview, is supported for Azure Data Lake Storage Gen2 and Blob storage accounts in East US, East US 2, North Europe, South Central US, West Central US, West Europe, West US, and West US 2.
Read More for the details.
Amazon DataZone is a new data management service to catalog, discover, analyze, share, and govern data across organizational boundaries. Visibility of and access to data are key drivers of innovation and value for business. To provide visibility and access between organizations, Amazon DataZone creates a usage flywheel. The flywheel is driven by data producers, who securely share data and its context, and data consumers, who find answers to business questions in the data.
Read More for the details.
Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Today, we are announcing that Amazon Translate asynchronous Batch Translation is now available in eight additional regions – US West (Northern California), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Paris) and Europe (Stockholm). With this expansion, Amazon Translate asynchronous Batch Translation is supported in 15 regions. Now, AWS customers can reach a wider set of users in many geographies that are increasingly expecting to consume media and interact with organizations in the language of their choice.
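For reference, starting an asynchronous batch job in one of the newly supported regions takes only a few lines; here is a minimal sketch with boto3, where the S3 paths, IAM role, and region are placeholders:

```python
# A minimal sketch of starting an asynchronous batch translation job with
# boto3. The S3 paths, IAM role ARN, and region below are placeholders.
import boto3

translate = boto3.client("translate", region_name="ap-southeast-2")  # Sydney

translate.start_text_translation_job(
    JobName="docs-en-to-fr",
    InputDataConfig={"S3Uri": "s3://my-bucket/input/", "ContentType": "text/plain"},
    OutputDataConfig={"S3Uri": "s3://my-bucket/output/"},
    DataAccessRoleArn="arn:aws:iam::123456789012:role/TranslateBatchRole",  # placeholder
    SourceLanguageCode="en",
    TargetLanguageCodes=["fr"],
)
```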
Read More for the details.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6in instances are available in Europe (Stockholm), Middle East (Bahrain), Asia Pacific (Jakarta, Mumbai, Sydney), Africa (Cape Town), South America (Sao Paulo), Canada (Central), and AWS GovCloud (US-East) Regions. These instances are powered by 3rd Generation Intel Xeon Scalable processors with all-core turbo frequency of up to 3.5 GHz, and are the first x86-based Amazon EC2 instances to offer up to 200 Gbps network bandwidth.
Read More for the details.
Last year at Enterprise Connect 2022, Google Cloud doubled down on its commitment to the contact center, with the addition of end-to-end platform capabilities to Contact Center AI (CCAI). We’ve been pleased to see CCAI help companies reimagine customer and agent experiences. For example, Segra, one of the largest independent fiber infrastructure bandwidth companies in the eastern U.S., uses CCAI to orchestrate flows when customers use self-help resources, and to expand customer support to new channels. These efforts have helped their customers get more seamless and complete answers. With CCAI, Segra has improved customer and agent experiences with a 41% decrease in average handle time (AHT) and a 62% decrease in abandonment rates.
The impact we create for customers is being recognized more broadly. Google Cloud was recently named a Leader in the 2023 Gartner Magic Quadrant for Enterprise Conversational AI Platforms. This recognition is a testament to our robust investments in the space and commitment to the future of conversational AI.
We’re thrilled to continue this momentum with our recent generative AI announcements, including bringing our new Generative AI App Builder to CCAI customers. Unveiled earlier this month, Gen App Builder lets organizations leverage foundational models, conversational AI, and search technologies to quickly and easily create multimodal chatbots and personalized self-serve customer experiences. In addition to generative AI capabilities, we’re also bringing new workforce management tools, expanding the ways human agents can support customers. In this article, we’ll explore how CCAI’s growing capabilities can help your company deliver outstanding customer support.
“Generative AI is top of mind for most businesses today looking to improve experiences for their customers.” says David Seybold, Chief Executive Officer at TTEC Digital. ”Our TTEC clients are already seeing the power of Google Cloud CCAI. We are on a path to accelerate the transformation of existing CX technology stacks and deliver on the last mile of the customer journey through our associates who curate knowledge and provide the quality assurance and data annotation needed for responsible, impactful use of Google’s Generative AI App Builder.”
Many calls to the contact center are from consumers looking for information spread across a company’s website, product manuals, support FAQ docs, and other materials that are available to the support agents. With Gen App Builder, you can build a bot within minutes to answer these inquiries. Just point the product at relevant sources, like the URL of a website, a manual, or a repository of websites, and with only a few clicks, a customized chatbot will be ready for deployment across a variety of channels, from your contact center, website, or app to customer experience channels or even popular messaging apps. Moreover, with CCAI, calls from customers are automatically matched and routed to the virtual agents. Live agents also benefit by having answers to customer questions automatically surfaced to them during a live call, improving the experience for customers regardless of channel.
With Gen App Builder, you can build bots that not only provide helpful information, but also let customers make payments, process returns, and execute other transactions. It adds three new ways to easily and quickly build and deploy bots and virtual agents that enable more magical and personalized customer interactions:
Choose a prebuilt component. Handling common tasks that connect to your data is simple. Easily drag building blocks and prebuilt flows onto the canvas for common use cases, such as making a payment or checking order status, then fill out a form to connect to data sources, and that’s it! The bot is ready to deploy.
Compose your business logic graph. Support complex tasks with simple design tools that let you map your entire bot in minutes, creating a visual representation of how states and tasks relate to each other. You don’t need to think about the conversational details, which are handled by the AI.
Write instructions in natural language. Simply explain what the task does, which information it should collect, and which APIs it should send that information to—the AI takes care of all the rest.
CCAI customers like Bell Canada are excited to leverage these new capabilities to improve customer experiences. “Bell is excited to explore generative AI and its applications across personalization, marketing, and customer service,” says Michel Richer, Vice President, Data Engineering and Artificial Intelligence at Bell. “We look forward to building on our work over the past few years using conversational analytics, with CCAI as a key driver for customer experience improvement. Bell strongly believes that the developments in generative AI are going to be transformative and is looking forward to innovating and partnering in this space with Google Cloud.”
Google Cloud is not only reimagining the customer experience with more helpful virtual agents, but also improving the experience for human agents. Workforce Management (WFM) capabilities are now delivered with Google Cloud CCAI, via our partner, UJET. WFM’s out-of-the-box features help agents to anticipate customer demand, produce more accurate forecasting, and plan more effectively. Contact center managers can optimize staffing across voice and digital channels, all while managing the complexities of a remote or distributed workforce. They can also eliminate manual forecasting in spreadsheets, thanks to an intuitive user interface built for speed.
We are also happy to provide customers with WFM choice by announcing the availability of native integration support for Verint, a veteran in the WFM space. Google Cloud’s CCAI and Verint’s Customer Engagement AI are brought together with a seamless platform-to-platform integration, providing a comprehensive Contact Center as a Service (CCaaS) offering. This provides a complete customer engagement experience solution, a powerful combination for enterprises.
Outstanding customer support is just a few clicks away with Google Cloud’s CCAI. It is reimagining the experience for both customers and agents, and making it easier for you to leverage generative AI to unlock new ways to interface, learn about, and transact with your customers. Workforce Management will enable agents to have smoother scheduling experiences. We look forward to announcing even more enhancements later this year and continuing on this journey with our customers! Check out more about Google Cloud Contact Center AI here.
GARTNER is a registered trademark and service mark, and MAGIC QUADRANT is a registered trademark, of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and are used herein with permission. All rights reserved.
Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designations. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Read More for the details.
Starting today, you can use AWS Application Migration Service (AWS MGN) to migrate and modernize your applications in the AWS Middle East (UAE) Region.
Read More for the details.
When it comes to their data platforms, organizations want flexibility, predictable pricing, and the best price performance. Today at the Data Cloud & AI Summit, we are announcing BigQuery editions with three pricing tiers — Standard, Enterprise and Enterprise Plus — for you to choose from, with the ability to mix and match for the right price-performance based on your individual workload needs.
BigQuery editions come with two innovations. First, we are announcing compute capacity autoscaling, which adds fine-grained compute resources in real time to match the demands of your workloads and ensures you only pay for the compute capacity you use. Second, compressed storage pricing allows you to pay for data storage only after it’s been highly compressed. With compressed storage pricing, you can reduce your storage costs even as your data footprint grows. These updates reflect our commitment to offer new, flexible pricing models for our cloud portfolio.
With over a decade of continuous innovation and working together with customers, we’ve made BigQuery one of the most unified, open, secure and intelligent data analytics platforms on the market, and a central component of your data cloud. Unique capabilities include BigQuery ML for using machine learning through SQL, BigQuery Omni for cross-cloud analytics, BigLake for unifying data warehouses and lakes, support for analyzing all types of data, an integrated experience for Apache Spark, geospatial analysis, and much more. All these capabilities build on the recent innovations we announced at Google Cloud Next in 2022.
“Google Cloud has taken a significant step to mature the way customers can consume data analytics. Fine-grained autoscaling ensures customers pay only for what they use, and the new BigQuery editions are designed to provide more pricing choice for their workloads.” — Sanjeev Mohan, Principal at SanjMo and former Gartner Research VP.
With our new flexible pricing options, the ability to mix and match editions, and multi-year usage discounts, BigQuery customers can gain improved predictability and lower total cost of ownership. In addition, with BigQuery’s new granular autoscaling, we estimate customers can reduce their current committed capacity by 30-40%.
“BigQuery’s flexible support for pricing allows PayPal to consolidate data as a lakehouse. Compressed storage along with autoscale options in BigQuery helps us provide scalable data processing pipelines and data usage in a cost-effective manner to our user community.” – Bala Natarajan, VP Enterprise Data Platforms at PayPal.
BigQuery editions allow you to pick the right feature set for individual workload requirements. For example, the Standard Edition is best for ad-hoc, development, and test workloads, while Enterprise has increased security, governance, machine learning and data management features. Enterprise Plus is targeted at mission-critical workloads that demand high uptime, availability and recovery requirements, or have complex regulatory needs. The table below describes each packaging option.
BigQuery autoscaler manages compute capacity for you. You can set a maximum and an optional baseline compute capacity, and let BigQuery take care of provisioning and optimizing compute capacity based on usage, without any manual intervention on your part. This ensures you get sufficient capacity while reducing management overhead and underutilized capacity.
Unlike alternative VM-based solutions that charge for a full warehouse with pre-provisioned, fixed capacity, BigQuery harnesses the power of a serverless architecture to provision additional capacity in increments of slots with per-minute billing, so you only pay for what you use.
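For teams that manage capacity as code, here is a minimal sketch of what setting a baseline and an autoscaling maximum can look like, assuming BigQuery's reservation DDL; the project, region, reservation name, and slot counts are placeholders:

```python
# A minimal sketch of capacity-as-code, assuming BigQuery's reservation DDL;
# the project, region, reservation name, and slot counts are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

ddl = """
CREATE RESERVATION `my-project.region-us.analytics-reservation`
OPTIONS (
  edition = 'ENTERPRISE',
  slot_capacity = 100,       -- optional always-on baseline
  autoscale_max_slots = 400  -- ceiling the autoscaler may reach
);
"""
client.query(ddl).result()  # waits for the DDL statement to complete
```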
“BigQuery’s new pricing flexibility allows us to use editions to support the needs of our business at the most granular level.” — Antoine Castex, Group Data Architect at L’Oréal.
Here are a few examples of customers benefiting from autoscaling:
Retailers experiencing spikes in demand, scaling for a few hours a couple of times per year
Analysts compiling quarterly financial reports for the CFO
Startups managing unpredictable needs in the early stages of their business
Digital natives preparing for variable demand during new product launches
Healthcare organizations scaling usage during seasonal outbreaks like the flu
As data volumes grow exponentially, customers find it increasingly complex and expensive to store and manage data at scale. With the compressed storage billing model you can manage complexity across all data types while keeping costs low.
Compressed storage in BigQuery is grounded in our years of innovation in storage optimization, columnar compression, and compaction. With this feature, Exabeam, a leader in security operations, has achieved a compression rate of more than 12:1 and can store more data at a lower cost, which helps its customers solve the most complex security challenges. Whether customers migrate to BigQuery editions or continue to use the on-demand model, they can take advantage of the compressed storage billing model to store more data cost-efficiently.
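To gauge what compressed storage could mean for your own datasets before migrating, you can compare logical and physical bytes in the INFORMATION_SCHEMA.TABLE_STORAGE view; a minimal sketch, with the project and region as placeholders:

```python
# Estimate potential compression ratios by comparing logical and physical
# bytes in INFORMATION_SCHEMA.TABLE_STORAGE; project and region are
# placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
SELECT
  table_name,
  total_logical_bytes,
  total_physical_bytes,
  SAFE_DIVIDE(total_logical_bytes, total_physical_bytes) AS compression_ratio
FROM `my-project.region-us.INFORMATION_SCHEMA.TABLE_STORAGE`
WHERE total_physical_bytes > 0
ORDER BY compression_ratio DESC
LIMIT 10;
"""
for row in client.query(sql).result():
    print(f"{row.table_name}: {row.compression_ratio:.1f}:1")
```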
Starting on July 5, 2023, BigQuery customers will no longer be able to purchase flat-rate annual, flat-rate monthly, and flex slot commitments. Customers already leveraging existing flat-rate pricing can begin migrating their flat and flex capacity to the right edition based on their business requirements, with options to move to edition tiers as their needs change.
Taking into account BigQuery’s serverless functionality, query performance, and capability improvements, we are increasing the price of the on-demand analysis model by 25% across all regions, starting on July 5, 2023.
Irrespective of which pricing model you choose, the combination of these innovations with multi-year commitment usage discounts can help you lower your total cost of ownership. Refer to the latest BigQuery cost optimization guide to learn more.
Customers will receive more information about the changes coming to BigQuery’s commercial model through a Mandatory Service Announcement email in the next few days. In the meantime, check out the FAQs, pricing information and product documentation, and register for the upcoming BigQuery roadmap session on April 5, 2023 to learn more about BigQuery’s latest innovations.
Table footnotes: (1) all Google Cloud platform-wide certifications, including ISO 9001, ISO 27001, SOC 1-3, and PCI; (2) roadmap functionality.
Read More for the details.
Migrating off expensive, legacy databases, wherever they may be, remains a top priority for enterprises. Last year, we announced the general availability of AlloyDB, a fully-managed, PostgreSQL-compatible database service that provides a powerful option for modernizing your most demanding enterprise database workloads and for existing PostgreSQL users looking to scale with no application changes. Not only have customers given us great feedback on the performance and scalability of AlloyDB, they’ve also shown a surge of interest in migrating completely off expensive, legacy databases that have unfriendly licensing practices. Customers have asked us to help them accelerate their legacy database modernization by bringing a fully supported AlloyDB edition to wherever their workloads run, whether on-premises, at the edge, or in other clouds.
Today, we’re excited to announce the technology preview of AlloyDB Omni, a downloadable edition of AlloyDB designed to run on premises, at the edge, across clouds, or even on developer laptops. AlloyDB Omni offers the AlloyDB benefits you’ve come to love, including high performance, PostgreSQL-compatibility, and Google Cloud support, all at a fraction of the cost of legacy databases.
AlloyDB Omni is powered by the same engine that underlies the cloud-based AlloyDB service, which you can use to modernize your legacy databases in place, as part of your journey to the cloud. In our performance tests, AlloyDB Omni is more than 2x faster than standard PostgreSQL for transactional workloads, and delivers up to 100x faster analytical queries than standard PostgreSQL. Download the free developer edition today at https://cloud.google.com/alloydb/omni.
Like many enterprises, your organization probably wants to get off legacy databases to eliminate unfriendly licensing relationships, expensive fees, and vendor lock-in. However, you can’t always move as fast as you want because your workloads may be restricted to on-premises data centers to meet regulatory or data sovereignty requirements, or you may be running applications at the edge, such as at a retail store.
As a stepping stone to the cloud, customers have asked us for a path to support in-place modernization with a database that not only exceeds the performance and manageability of legacy databases, but also includes the full enterprise support of Google Cloud.
So we decided to do something unusual for a cloud provider: we took some of the best of AlloyDB and the best of our high-end enterprise support capability, and delivered a high-performance, downloadable version that you can leverage for in-place database modernization as part of your cloud journey, all at a fraction of the cost of legacy databases. AlloyDB Omni also runs in Google Distributed Cloud Hosted for isolated environment requirements as part of our expanding catalog of services.
“Multicloud, hybrid cloud, and edge computing are here to stay,” said Carl Olofson, Research Vice President, Data Management Software, IDC. “While the cloud is often the final destination, many organizations have workloads that can’t be immediately moved, creating the need for a downloadable, cross-platform database to run enterprise applications. AlloyDB Omni is a great move for Google Cloud because it meets customers where they are rather than requiring them to change platforms. A PostgreSQL-compatible database with accelerated performance, an integrated analytics engine, and ML-driven management fills a critical need in the database market, and AlloyDB Omni is a compelling option for organizations looking for a proven database that’s supported by a major vendor and offers cloud and non-cloud deployment options.”
AlloyDB Omni includes many of the core database engine innovations of the fully-managed AlloyDB service including improved transactional performance compared to standard PostgreSQL, accelerated analytical processing, automatic vacuum and memory management, and an index advisor that helps optimize frequently run queries.
The AlloyDB Omni index advisor helps alleviate the guesswork of tuning query performance by conducting a deep analysis of the different parts of a query including subqueries, joins, and filters. It periodically analyzes the database workload, identifies queries that can benefit from indexes, and recommends new indexes that can increase query performance.
AlloyDB Omni also includes the AlloyDB columnar engine, which keeps frequently queried data in an in-memory columnar format for faster scans, joins, and aggregations. AlloyDB Omni uses machine learning to automatically organize your data between row-based and columnar formats, convert the data when needed, and choose between columnar and row-based execution plans. This delivers excellent performance for a wide range of queries, with minimal management overhead.
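As a rough illustration, and assuming the columnar engine management functions documented for AlloyDB (such as google_columnar_engine_add) carry over to Omni, pinning a hot table into the columnar store might look like this; connection details and the table name are placeholders:

```python
# A rough sketch, assuming AlloyDB's documented columnar engine functions
# are available in Omni; connection details and the table name are
# placeholders. Omni speaks standard PostgreSQL, so psycopg2 works as-is.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432,  # wherever your Omni instance runs
    dbname="postgres", user="postgres", password="example-password",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Pin a frequently scanned table into the in-memory columnar store so
    # scans, joins, and aggregations over it can use columnar execution.
    cur.execute("SELECT google_columnar_engine_add('orders');")
conn.close()
```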
“At CME Group, we have extremely high transaction throughput and operational requirements from our customers, combined with stringent uptime and availability demands,” says Sunil Cutinho, Chief Information Officer at CME Group, the world’s leading derivatives exchange, where clients trade futures and options across every investable asset class. “AlloyDB gives us the performance and scalability we need, and AlloyDB Omni enables us to begin modernizing and migrating to Google Cloud, while maintaining our customer commitments and supporting our most mission critical traditional legacy workloads.”
To get you started, AlloyDB Omni offers a free developer edition for non-production use, which can be easily installed on developer laptops. When it’s time to move an application to a production-ready environment, it will run unchanged on AlloyDB Omni in any environment, or on the AlloyDB for PostgreSQL service in Google Cloud. If needed, you can use standard open source PostgreSQL tools to migrate or replicate your data. You can also use standard open source PostgreSQL tools for database operations like backup and replication.
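As one example of that portability, here is a migration sketched with the standard pg_dump and pg_restore tools mentioned above; hosts, credentials, and database names are placeholders:

```python
# One possible migration path using standard PostgreSQL tools; hosts,
# credentials, and database names are placeholders, and the target database
# is assumed to already exist on the Omni side.
import subprocess

# Dump the source database (pg_dump reads the password from ~/.pgpass or
# the PGPASSWORD environment variable).
subprocess.run(
    ["pg_dump", "--format=custom", "--file=appdb.dump",
     "--host=legacy-db.example.com", "--username=app", "appdb"],
    check=True,
)

# Restore the dump into AlloyDB Omni.
subprocess.run(
    ["pg_restore", "--host=localhost", "--port=5432",
     "--username=postgres", "--dbname=appdb", "appdb.dump"],
    check=True,
)
```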
Because AlloyDB Omni uses standard PostgreSQL drivers, existing PostgreSQL applications are immediately AlloyDB Omni-compatible. In addition, AlloyDB Omni provides full compatibility with PostgreSQL extensions and configuration flags, allowing you to leverage the full PostgreSQL ecosystem.
To give you peace of mind, when AlloyDB Omni is made generally available (GA), Google Cloud will offer full enterprise support, including 24/7 technical support and software updates for security patches and features.
Are you ready to try AlloyDB Omni? You can read more about it in the documentation, and you can complete this signup form to get access to the technology preview. As part of the download process, you will also accept the Terms of Service. Note that AlloyDB Omni is not yet suitable for production use. We look forward to your feedback!
Read More for the details.
Even in today’s changing business climate, our customers’ needs have never been more clear: They want to reduce operating costs, boost revenue, and transform customer experiences. Today, at our third annual Google Data Cloud & AI Summit, we are announcing new product innovations and partner offerings that can optimize price-performance, help you take advantage of open ecosystems, securely set data standards, and bring the magic of AI and ML to existing data, while embracing a vibrant partner ecosystem. Our key innovations will enable customers to:
Improve data cost predictability using BigQuery editions
Break free from legacy databases with AlloyDB Omni
Unify trusted metrics across the organization with Looker Modeler
Extend AI and ML insights to BigQuery and third-party platforms
In the face of fast-changing market conditions, organizations need smarter systems that provide the required efficiency and flexibility to adapt. That is why today, we’re excited to introduce new BigQuery pricing editions along with innovations for autoscaling and a new compressed storage billing model.
BigQuery editions provide more choice and flexibility for you to select the right feature set for various workload requirements. You can mix and match among Standard, Enterprise, and Enterprise Plus editions to achieve the preferred price-performance by workload.
BigQuery editions include the ability to make single- or multi-year commitments at lower prices for predictable workloads, plus new autoscaling that supports unpredictable workloads by letting you pay only for the compute capacity you use. And unlike alternative VM-based solutions that charge for a full warehouse with pre-provisioned, fixed capacity, BigQuery harnesses the power of a serverless architecture to provision additional capacity in granular increments, so you don't overpay for underutilized capacity. Additionally, we are offering a new compressed storage billing model for BigQuery editions customers, which can reduce costs depending on the type of data stored.
For many organizations, reducing costs means migrating from expensive legacy databases. But sometimes, they can’t move as fast as they want, because their workloads are restricted to on-premises data centers due to regulatory or data sovereignty requirements, or they’re running their application at the edge. Many customers need a path to support in-place modernization with AlloyDB, our high performance, PostgreSQL-compatible database, as a stepping stone to the cloud.
Today, we’re excited to announce the technology preview of AlloyDB Omni, a downloadable edition of AlloyDB designed to run on-premises, at the edge, across clouds, or even on developer laptops. AlloyDB Omni offers the AlloyDB benefits you’ve come to love, including high performance, PostgreSQL compatibility, and Google Cloud support, all at a fraction of the cost of legacy databases. In our performance tests, AlloyDB Omni is more than 2x faster than standard PostgreSQL for transactional workloads, and delivers up to 100x faster analytical queries than standard PostgreSQL. Download the free developer offering today at https://cloud.google.com/alloydb/omni.
And to make it easy for you to take advantage of our open data cloud, we’re announcing Google Cloud’s new Database Migration Assessment (DMA) tool, as part of the Database Migration Program. This new tool provides easy-to-understand reports that demonstrate the effort required to move to one of our PostgreSQL databases — whether it’s AlloyDB or Cloud SQL. Contact us today at g.co/cloud/migrate-today to get started with your migration journey.
Data-driven organizations need to know they can trust the data in their business intelligence (BI) tools. Today we are announcing Looker Modeler, which allows you to define metrics about your business using Looker’s innovative semantic modeling layer. Looker Modeler is the single source of truth for your metrics, which you can share with the BI tools of your choice, such as Power BI, Tableau, and ThoughtSpot, or Google solutions like Connected Sheets and Looker Studio, providing users with quality data to make informed decisions.
In addition to Looker Modeler, we are also announcing BigQuery data clean rooms, to help organizations to share and match datasets across companies while respecting user privacy. In Q3, you should be able to use BigQuery data clean rooms to share data and collaborate on analysis with trusted partners, all while preserving privacy protections. One common use case for marketers could be combining ads campaign data with your first-party data to unlock insights and improve campaigns.
We are also extending our vision for data clean rooms with several new partnerships. Habu will integrate with BigQuery to support privacy-safe data orchestration and their data clean room service. LiveRamp on Google Cloud will enable privacy-centric data collaboration and identity resolution right within BigQuery to help drive more effective data partnerships. Lytics is a customer data platform built on BigQuery that helps activate insights across marketing channels.
BigQuery ML, which empowers data analysts to use machine learning through existing SQL tools and skills, saw over 200% year-over-year growth in usage in 2022. Since BigQuery ML became generally available in 2019, customers have run hundreds of millions of prediction and training queries. Google Cloud provides infrastructure for developers to work with data, AI, and ML, including Vertex AI, Cloud Tensor Processing Units (TPUs), and the latest GPUs from Nvidia. To bring ML closer to your data, we are announcing new capabilities in BigQuery that will allow users to import models such as PyTorch, host remote models on Vertex AI, and run pre-trained models from Vertex AI.
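For readers new to BigQuery ML, model training and prediction really are plain SQL; here is a minimal sketch using the public natality sample, with the project and dataset names as placeholders:

```python
# Training and prediction in BigQuery ML are plain SQL; a minimal sketch
# using the public natality sample. Project and dataset names are
# placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Train a linear regression model on columns from the public sample table.
client.query("""
CREATE OR REPLACE MODEL `my-project.demo.weight_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['weight_pounds']) AS
SELECT weight_pounds, mother_age, gestation_weeks
FROM `bigquery-public-data.samples.natality`
WHERE weight_pounds IS NOT NULL AND gestation_weeks IS NOT NULL
""").result()

# Run a prediction with ML.PREDICT, again in SQL.
rows = client.query("""
SELECT predicted_weight_pounds
FROM ML.PREDICT(MODEL `my-project.demo.weight_model`,
                (SELECT 30 AS mother_age, 40 AS gestation_weeks))
""").result()
print(list(rows)[0].predicted_weight_pounds)
```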
Building on our open ecosystem for AI development, we’re also announcing partnerships to bring more choice and capabilities for customers to turn their data into insights from AI and ML, including new integrations between:
DataRobot and BigQuery, providing users with repeatable code patterns to help developers modernize deployment and experiment with ML models more quickly.
Neo4j and BigQuery, allowing users to extend SQL analysis with graph data science and ML using BigQuery, Vertex AI and Colab notebooks.
ThoughtSpot and multiple Google Cloud services — BigQuery, Looker, and Connected Sheets — which will provide more AI-driven, natural language search capabilities to help users more quickly get insights from their business data.
Over 900 software partners power their applications using Google’s Data Cloud. Partners have extended Google Cloud’s open ecosystem by introducing new ways for customers to accelerate their data journeys. Here are a few updates from our data cloud partners:
Crux Informatics is making more than 1,000 new datasets available on Analytics Hub, with plans to increase to over 2,000 datasets later this year.
Starburst is deepening its integration with BigQuery and Dataplex so that customers can bring analytics to their data no matter where it resides, including data lakes, multi and hybrid cloud sources.
Collibra introduced new features across BigQuery, Dataplex, Cloud Storage, and AlloyDB to help customers gain a deeper understanding of their business with trusted data.
Informatica launched a cloud-native, AI-powered master data management service on Google Cloud to make it easier for customers to connect data across the enterprise for a contextual 360-degree view and insights in BigQuery.
Google Cloud Ready for AlloyDB is a new program that recognizes partner solutions that have met stringent integration requirements with AlloyDB. Thirty partners have already achieved the Cloud Ready – AlloyDB designation, including Collibra, Confluent, Datadog, MicroStrategy, and Striim.
At Google Cloud, we believe that data and AI have the power to transform your business. Join all our sessions at the Google Data Cloud & AI Summit for more on the announcements we’ve highlighted today. Dive into customer and partner sessions, and access hands-on content on the summit website. Finally, join our Data Cloud Live events series happening in a city near you.
Read More for the details.
Hopefully you caught some of the buzz today that we’ve launched a little something new: AlloyDB Omni. It’s a downloadable edition of AlloyDB designed to run wherever you need it. That might be on premises, at the edge, across clouds, or even on developer laptops. AlloyDB Omni is powered by the same engine as the cloud-based AlloyDB service, and in our performance tests is more than 2x faster than standard PostgreSQL for transactional workloads, and delivers up to 100x faster analytical queries than standard PostgreSQL.
With AlloyDB Omni, you get an enhanced version of PostgreSQL that you can run virtually anywhere, with big performance benefits, automatic memory and vacuum management, AI-assisted index suggestions, and so much more. We will also offer Google Cloud Premium Support for enterprise class mission-critical workloads. You can find more info in the launch announcement.
You might be asking yourself, “How can I get this in my hands right now?” and we’ll tell you: start by filling out the preview signup form. Then you’ll get a link to the download and installation instructions. The preview only runs on Linux, but if you’re on an Intel-based Mac or a Windows machine, you can install and run it in a virtualized environment with no problem.
As you walk through those instructions, one thing you want to watch out for is disk space. AlloyDB Omni needs more space than the default Google Compute Engine boot disk allocation (10 GB). So if you’re deploying it to GCE, you should allocate at least 20GB of storage space to hold the downloaded software and your initial database. If you plan to test AlloyDB Omni with a larger amount of data, plan accordingly.
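If you script your VMs, here is a sketch of provisioning a test instance with a larger boot disk using the gcloud CLI (wrapped in Python for consistency with the other examples); the instance name, zone, and image are placeholders:

```python
# A sketch of provisioning a test VM with a larger boot disk; the gcloud CLI
# is wrapped in Python here, and the instance name, zone, and image are
# placeholders.
import subprocess

subprocess.run(
    ["gcloud", "compute", "instances", "create", "alloydb-omni-test",
     "--zone=us-central1-a",
     "--image-family=debian-11", "--image-project=debian-cloud",
     "--boot-disk-size=20GB"],  # override the 10 GB default
    check=True,
)
```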
AlloyDB Omni is ideal when you need a highly scalable and performant version of PostgreSQL, but you can’t run in the cloud due to regulatory or data sovereignty requirements, or when you want to migrate off of legacy databases fast and modernize in-place as a first step in your journey to the cloud — or maybe you just need a local database. It’s great for on-the-edge use cases (such as retail stores) when you need a database that will keep running when disconnected from the internet and the cloud, or when you need to run close to users to minimize latency, or when you just want a development database locally. AlloyDB Omni is also a good fit when you need to run the same database in multiple clouds e.g. if you offer services to your customers across different clouds.
AlloyDB Omni is designed to run almost anywhere you can run Debian or Red Hat Linux including in virtualized environments such as VirtualBox on your local dev machine or laptop; you can also install and run it on AWS or Azure compute instances with similar Linux environments.
As you play around with AlloyDB Omni (it’s in technology preview, so please don’t run it in production yet!), if you like what you see and you have a production application that could benefit from AlloyDB features, consider using the AlloyDB service on Google Cloud. You will see additional benefits in the cloud, including automatic management of backups, failovers, and high availability; built-in support for scaling out with read pools; and automatic maintenance windows with minimal downtime.
Get started by filling out the AlloyDB Omni Technology Preview signup form here, and please reach out to us on our Discord server in the cloud-databases channel!
Read More for the details.
Okay, so I initially suggested that I deliver the content of this blog as an interpretive dance video. My suggestion was turned down, and I’m sure you’re as disappointed as I am. But dancing or not, I’m really excited about Generative AI support in Vertex AI.
Vertex AI was launched in 2021 to help fast-track ML model development and deployment, from feature engineering to model training to low-latency inference, all with enterprise governance and monitoring. Since then, customers like Wayfair, Vodafone, Twitter, and CNA have accelerated their ML projects with Vertex AI, and we’ve released hundreds of new features.
But we didn’t stop there — Vertex AI recently had its biggest update yet. Generative AI support in Vertex AI offers the simplest way for teams to take advantage of an array of generative models. Now it’s possible to harness the full power of generative AI built directly in our end-to-end machine learning platform.
In the last few months, consumer-grade generative AI has captured the attention of millions, with intelligent chatbots and lifelike digital avatars. Realizing the potential of this technology means putting it in the hands of every developer, business, and government. To date, it’s been difficult to access generative AI and customize foundation models for business use cases because managing these large models in production is a difficult task, requiring an advanced toolkit, lots of data, specialized skills, and even more time.
Generative AI support in Vertex AI makes it easier for developers and data scientists to access, customize, and deploy foundation models from a simple user interface. We provide a wide range of tools, automated workflows, and starting points. Once deployed, foundation models can be scaled, managed, and governed in production using Vertex AI’s end-to-end MLOps capabilities and fully-managed AI infrastructure.
Vertex AI recently added two new buckets of features: Model Garden and Generative AI Studio. In this blog, we dive deeper into these features and explore what’s possible.
Model Garden provides a single environment to search, discover, and interact with Google’s own foundation models, and in time, hundreds of open-source and third-party models. Users will have access to more than just text models — they will be able to build next-generation applications with access to multimodal models from Google across vision, dialog, code generation, and code completion. We’re committed to providing choice at every level of the AI stack, which is why Model Garden will include models from both open-source partners and our ecosystem of AI partners. With a wide variety of model types and sizes available in one place, our customers will have the flexibility to use the best resource for their business needs.
From Model Garden, users can kick off a variety of workflows, including using the model directly as an API, tuning the model in Generative AI Studio, or deploying the model directly to a data science notebook in Vertex AI.
Generative AI Studio is a managed environment in Vertex AI where developers and data scientists can interact with, tune, and deploy foundation models. Generative AI Studio provides a wide range of capabilities including a chat interface, prompt design, prompt tuning, and even the ability to fine-tune model weights. From Generative AI Studio, users can implement newly-tuned models directly into their applications or deploy models to production on Vertex AI’s ML platform. With both tools that help application developers and data scientists contribute to building generative AI, organizations can bring the next generation of applications to production faster, and with more confidence.
1. Use foundation models as APIs: We’re making Google’s foundation models available to use as APIs, including text, dialogue, code generation and completion, image generation, and embeddings. Vertex AI’s managed endpoints make it easy to build generative capabilities into an application, requiring only a few lines of code, just like any other Google Cloud API. Developers do not need to worry about the complexities of provisioning storage and compute resources, or optimizing the model for inference.
2. Prompt design: Generative AI Studio provides an easy-to-use interface for prompt design, which is the process of manually creating text inputs, or prompts, that inform a foundation model. The familiar chat-like experience enables people without developer expertise to interact with a model. Users can also configure the system well beyond the chat interface. For example, they can control the temperature of responses: lower temperatures yield more focused, predictable output, while higher temperatures yield more varied, creative output (see the sketch after this list).
3. Prompt tuning: Prompt tuning is an efficient, low-cost way of customizing a foundation model without retraining it. Prompts are how we guide the model to generate useful output, using natural language rather than a programming language. In Generative AI Studio, it’s easy to upload user data that is then used to prompt the model to behave in a specific way. For example, if a user wants to update the PaLM language model to speak in their brand voice, they can simply upload brand documents, tweets, press releases, and other assets to Generative AI Studio.
4. Fine-tuning: Fine-tuning in Generative AI Studio is a great option for organizations that want to build highly differentiated generative AI offerings. Fine-tuning is the process of further training a pre-trained model on new data, resulting in changes to the model’s weights. This is helpful for use cases that require outputs with specialized results, like legal or medical vocabulary. In Vertex AI Generative AI Studio, users can upload large data sets and re-train models using Vertex AI Training. Google Cloud offers you the ability to fine-tune your model without exposing the changes in the weights outside your protected tenant. This enables you to use the power of foundation models without your data ever leaving your control.
5. Cost optimization: At Google, we have run these models in our production workloads for several years, and in that time, we’ve developed several techniques to optimize inference for cost. We offer optimized model selection (OMS), which looks at what is being asked of the model and routes the request to the smallest model that can effectively respond to it. When enabled, this happens in the background and is invoked based on different conditions.
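To make the API route in item 1 concrete, here is a minimal sketch, assuming the Vertex AI SDK's preview language-model interface; the project, model name, prompt, and parameter values are illustrative, and temperature works as described in item 2:

```python
# A minimal sketch of using a foundation model as an API, assuming the
# Vertex AI SDK's preview language-model interface; the project, model
# name, prompt, and parameter values are illustrative.
import vertexai
from vertexai.preview.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Summarize our return policy in two sentences.",
    temperature=0.2,        # low temperature: focused, repeatable output
    max_output_tokens=128,  # cap on the length of the generated response
)
print(response.text)
```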
“Since its launch, Vertex AI has helped transform the way CNA scales AI, better managing machine learning models in production,” says Santosh Bardwaj, SVP, Global Chief Data & Analytics Officer at CNA. “With large model support on Vertex AI, CNA can now also tailor its insights to best suit the unique business needs of customers and colleagues.”
“Google Cloud has been a strategic partner for Deutsche Bank, working with us to improve operational efficiency and reshape how we design and deliver products for our customers,” says Gil Perez, Chief Innovation Officer, Deutsche Bank. “We appreciate their approach to Responsible AI and look forward to co-innovating with their advancements in generative AI, building on our success to date in enhancing developer productivity, boosting innovation, and increasing employee retention.”
New business-ready generative AI products are available today to select developers in the Google Cloud trusted tester program.
Visit our AI on Google Cloud webpage or join me at the Google Data Cloud & AI Summit, live online March 29, to learn more about our new announcements. Who knows, I may even throw in some dance moves.
Read More for the details.
Generative AI is quickly becoming a strategic imperative for businesses and organizations across all industries. Yet many organizations face a barrier to adopting generative AI because they don’t have access to the latest models or the AI infrastructure to support large workloads. These barriers prevent organizations from innovating into the next era of AI.
We’re excited to continue the Google Cloud and NVIDIA collaboration to help companies accelerate generative AI and other modern AI workloads in a cost-effective, scalable, and sustainable way. We’re bringing together best-in-class GPUs from NVIDIA for large model inference and training with the latest AI models and managed tools for generative AI from Google Cloud. We believe for customers to innovate with AI, they also need the best supporting technology—that’s why NVIDIA and Google Cloud are also coming together to offer leading capabilities for data analytics and integration of the best open-source tools.
In this blog, we highlight ways Google Cloud and NVIDIA are teaming up to help the most innovative AI companies succeed.
Google Cloud and NVIDIA are partnering to provide leading capabilities across the AI stack that will help customers take advantage of one of the most influential technologies of our generation: generative AI. In March, Google Cloud launched Generative AI support in Vertex AI, which makes it possible for developers to access, tune, and deploy foundation models. For companies to effectively scale generative AI in production, they need high-efficiency, performant GPUs to support these large AI workloads.
At GTC, NVIDIA announced that Google Cloud is the first cloud provider to offer the NVIDIA L4 Tensor Core GPU, which is purpose-built for large AI inference workloads like generative AI. The L4 GPU will be integrated with Vertex AI and delivers cutting-edge performance-per-dollar for AI inference workloads that run on GPUs in the cloud. Compared to previous-generation instances, the new G2 VMs powered by NVIDIA L4 GPUs deliver up to 4x more performance. As a universal GPU offering, G2 VM instances also accelerate other workloads, offering significant performance improvements for HPC, graphics, and video transcoding. Currently in private preview, G2 VMs are both powerful and flexible, and scale easily from one up to eight GPUs.
We are also excited to work with NVIDIA to bring our customers the highest performance GPU offering for generative AI training workloads. With optimized support on Vertex AI for both A100 and L4 GPUs, users can both train and deploy generative AI models with the highest performance available on GPUs today.
We’re excited to offer NVIDIA AI Enterprise software on Google Marketplace. NVIDIA AI Enterprise is a suite of software that accelerates the data science pipeline and streamlines development and deployment of production AI. With over 50 frameworks, pretrained models and development tools, NVIDIA AI Enterprise is designed to accelerate enterprises to the leading edge of AI, while also simplifying AI to make it accessible to every enterprise.
The latest release supports NVIDIA L4 and H100 Tensor Core GPUs, as well as prior GPU generations including A100 and more.
We’ve worked with NVIDIA to make a wide range of GPUs accessible across Vertex AI’s Workbench, Training, Serving, and Pipeline services to support a variety of open-source models and frameworks. Whether an organization wants to accelerate their Spark, Dask and XGBoost pipelines or leverage PyTorch, TensorFlow, Keras or Ray frameworks for larger deep learning workloads, we have a range of GPUs throughout the Vertex AI Platform that can meet both performance and budget needs. These offerings allow users to take advantage of OSS frameworks and models in a managed and scalable way to accelerate the ML development and deployment lifecycle.
Different workloads require different cluster configurations, owing to different goals, data sets, complexities, and timeframes. So, having a one-size-fits-all Spark cluster always at the ready is just not cost-effective or appropriate. Google has partnered with NVIDIA to make GPU-accelerated Spark available to Dataproc customers using the RAPIDS suite of open-source software libraries and APIs for executing data science pipelines entirely on GPUs, so that customers can tailor their Spark clusters to AI/ML workloads.
NVIDIA has been working with the Spark open-source community to implement GPU acceleration in the latest Spark version (3.x). This new version of Spark will let Dataproc customers accelerate various Spark-based AI/ML and ETL workloads without any code changes. Running on GPUs provides latency and cost improvements during the data preparation and model training. Data science teams can tackle larger data sets, iterate faster, and tune models to maximize prediction accuracy and business value.
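On the Spark side, enabling the RAPIDS accelerator is a matter of configuration rather than code changes; here is a minimal PySpark sketch, assuming the plugin is installed on the cluster (Dataproc can install it at cluster creation) and using the property names from the RAPIDS accelerator documentation, with the bucket path as a placeholder:

```python
# A minimal PySpark sketch of enabling the RAPIDS accelerator, assuming the
# plugin jar is installed on the cluster; property names follow the RAPIDS
# accelerator documentation, and the bucket path is a placeholder.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-etl")
    # Load NVIDIA's SQL plugin and turn on GPU execution of SQL/DataFrame ops.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    .getOrCreate()
)

# Existing DataFrame code runs unchanged; supported operators move to GPU.
df = spark.read.parquet("gs://my-bucket/events/")
df.groupBy("event_type").count().show()
```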
Google and NVIDIA are focused on helping users reduce the carbon footprint of their digital workloads. In addition to operating the cleanest cloud infrastructure in the industry, Google partners with NVIDIA to offer GPUs that can help increase the energy efficiency of computationally intensive workloads like AI. Accelerated computing not only delivers the best performance, it is also the most energy-efficient form of compute, and is essential to realizing AI’s full potential. For example, on the Green500 list of the world’s most efficient supercomputers, GPU-accelerated systems are 10x more energy-efficient than CPU-only systems. And when you carefully choose the Google Cloud region and the right GPU for training large models, Google researchers found you can reduce the carbon emissions of AI/ML training by as much as 1,000x.
Since data center location is such an important factor in reducing carbon emissions of the workload, Google Cloud users are presented with low-carbon icons in the resource creation workflow to help them choose the most carbon-free location to place NVIDIA GPUs on Google Cloud.
A handful of early customers have been testing G2 and seeing great results in real-world applications. Here’s what some of them have to say about the benefits that G2 with NVIDIA L4 GPUs brings:
AppLovin enables developers and marketers to grow with market-leading technologies. Businesses rely on AppLovin to solve their mission-critical functions with a powerful, full-stack solution including user acquisition, retention, monetization and measurement.
“AppLovin serves billions of AI-powered recommendations per day, so scalability and value are essential to our business,” said Omer Hasan, Vice President, Operations at AppLovin. “With Google Cloud’s G2 we’re seeing that NVIDIA L4 GPUs offer a significant increase in the scalability of our business, giving us the power to grow faster than ever before.”
WOMBO aims to unleash everyone’s creativity through the magic of AI, transforming the way content is created, consumed, and distributed.
“WOMBO relies upon the latest AI technology for people to create immersive digital artwork from users’ prompts, letting them create high-quality, realistic art in any style with just an idea,” said Ben-Zion Benkhin, Co-Founder and CEO of WOMBO. “Google Cloud’s G2 instances powered by NVIDIA’s L4 GPUs will enable us to offer a better, more efficient image-generation experience for users seeking to create and share unique artwork.”
Descript’s AI-powered features and intuitive interface fuel YouTube and TikTok channels, top podcasts, and businesses using video for marketing, sales, and internal training and collaboration. Descript aims to make video a staple of every communicator’s toolkit, alongside docs and slides.
“G2 with L4’s AI Video capabilities allow us to deploy new features augmented by natural-language processing and generative AI to create studio-quality media with excellent performance and energy efficiency,” said Kundan Kumar, Head of Artificial Intelligence at Descript.
Workspot believes that the software-as-a-service (SaaS) model is the most secure, accessible and cost-effective way to deliver an enterprise desktop and should be central to accelerating the digital transformation of the modern enterprise.
“The Workspot team looks forward to continuing to evolve our partnership with Google Cloud and NVIDIA. Our customers have been seeing incredible performance leveraging NVIDIA’s T4 GPUs. The new G2 instances with L4 GPUs, through Workspot’s remote Cloud PC workstations, provide 2x and higher frame rates at 1280×711 and higher resolutions,” said Jimmy Chang, Chief Product Officer at Workspot.
We’re excited to continue growing our strategic partnership with NVIDIA, and look forward to ongoing collaboration to bring generative AI services and accelerated cloud computing to customers. Learn more at cloud.google.com/nvidia.
Read More for the details.
Businesses across industries share data, combining first-party data with external sources to obtain new insights and collaborate with partners. In fact, more than 6,000 organizations share over 275 PB of data across organizational boundaries every week using BigQuery. Last year, we launched Analytics Hub, built on BigQuery, which lets customers share and exchange data cost-effectively while helping to minimize data movement challenges. However, customers share many types of data, some of which may be subject to more stringent regulatory and privacy requirements. In those cases, customers may require extra layers of protection to analyze multi-channel marketing data and securely collaborate with business partners across different industries.
To help, we’re introducing BigQuery data clean rooms, coming in Q3, to help organizations create and manage secure environments for privacy-centric data sharing, analysis, and collaboration across organizations — all without generally needing to move or copy data.
Data clean rooms can help large enterprises to understand audiences while respecting user privacy and data security. For example, with BigQuery data clean rooms:
Retailers can optimize marketing and promotional activities by combining POS data from retail locations and marketing data from CPG companies
Financial Services enterprises can improve fraud detection by combining sensitive data from other financial and government agencies or build credit risk scoring by aggregating customer data across multiple banks
In the healthcare industry, doctors and pharmaceutical researchers can share data within a clean room to learn how patients are reacting to treatments.
These are just a few of the use cases that BigQuery data clean rooms can enable. Let’s take a look at how it works.
Data clean rooms will be available in all BigQuery regions through Analytics Hub, and can be created and deployed in minutes. Customers can use Google Cloud console or APIs to create secure clean room environments, and invite partners or other participants to contribute data.
Data contributors can publish tables or views within a clean room and aggregate, anonymize, and help protect sensitive information. They can also configure analysis rules to restrict the types of queries that can be run against the data. More importantly, adding data to a clean room does not generally require creating a copy or moving the data; it can be shared in-place and remain under the control of the data contributor. Finally, data subscriber customers will be able to discover and subscribe to clean rooms where they can perform privacy-centric queries within their own project.
Shared data within a clean room can be live and up-to-date — any changes to shared tables or views are immediately available to subscribers. Data contributors also receive aggregated logs and metrics to understand how their data is being used within a clean room.
There are no additional costs associated with using BigQuery data clean rooms. When collaborating with multiple partners, data contributors pay only for the storage of the data, and data clean room subscribers pay only for the queries they run.
Note: For organizations that need to run general-purpose applications on sensitive data with hardware-backed confidentiality and privacy guarantees, Google Cloud offers Confidential Space, which recently became generally available.
Data clean rooms capabilities in BigQuery are enabled today by our partnerships with Habu and LiveRamp. With Habu, a data collaboration platform for privacy-centric data orchestration, customers like L’Oréal are working with Google Cloud and Habu to help securely share data. “We are thrilled to be among the first to work within the BigQuery and Habu environment. The ability to safely and securely access and analyze more data, without tapping into data science resources, has empowered us to better understand our customers and measure the true impact of our marketing activities,” said Shenan Reed, SVP Head of Media at L’Oréal.
“Our partnership with Google Cloud exemplifies our commitment to deliver frictionless collaboration across an open technology ecosystem. Democratizing clean room access and delivering the privacy-centric tools that brands demand is opening up new avenues for growth for our shared customers,” said Matt Kilmartin, Co-founder and CEO of Habu.
LiveRamp on Google Cloud can enable privacy-centric data collaboration and identity resolution within BigQuery to drive more effective data partnerships. LiveRamp’s solution in BigQuery unlocks the value of a customer’s first-party data and establishes a privacy-centric identity data model that can accurately:
Improve consolidation of both offline and online records for a more accurate and holistic view of customer audience profiles
Activate audiences securely with improved match rates for greater reach and measurement quality
Connect customer and prospect data with online media reports and partner data elements to help improve customer journeys and attribution insights using ML models
“LiveRamp has developed deep support for accurate and privacy-safe data connectivity throughout the Google ecosystem. As one of the first clean room providers on Google Cloud with our Data Collaboration Platform, we have been very excited to watch the evolution of Analytics Hub into a powerful native clean room solution for the entire BigQuery ecosystem. Our continued work with Google Cloud is focused on allowing global clients to more easily connect and collaborate with data, driving more impactful audience modeling and planning, and ensuring that they can extend the utility of data natively on BigQuery both safely and securely,” said Max Parris, Head of Identity Resolution Products, LiveRamp.
Finally, Lytics is a customer data platform built onBigQuery that helps activate insights across marketing channels. Lytics offers a data clean room solution on Google Cloud for Advertisers and Media built from the collaboration with BigQuery. The Lytics data clean room offers features for managing and processing data, ingestion, consolidation, enrichment, stitching, and entity resolution. The solution utilizes Google Cloud’s exchange-level permissions and can unify data without exposing PII, thereby allowing customers to leverage data across their organization, while avoiding data duplication and limiting privacy risks.
Want to learn more about BigQuery data clean rooms? Join the upcoming BigQuery customer roadmap session, speak to a member of the Data Analytics team to inquire about early access, or reach out to Habu to get started today with your data clean room. For marketers who are looking to use a clean room to better understand Google and YouTube campaign performance and leverage first party data in a privacy-centric way, Ads Data Hub for Marketers, built on BigQuery, is Google’s advertising measurement solution.
Read More for the details.
If you’ve been exploring recently-launched consumer generative AI tools like Bardand thinking about how to build similar experiences for your business, Generative AI App Builder, or Gen App Builder for short, is here to get you started.
Gen App Builder is part of Google Cloud’s recently announced generative AI offerings and lets developers, even those with limited machine learning skills, quickly and easily tap into the power of Google’s foundation models, search expertise, and conversational AI technologies to create enterprise-grade generative AI applications.
“Google Cloud’s leading AI technology enables STARZ customers to discover more relevant content, increasing engagement with, and the likelihood of completing the content served to them,” says Robin Chacko, EVP Direct-to-Consumer, STARZ. “We’re excited about how generative AI-powered search will help users find the most relevant content even easier and faster.”
Gen App Builder is exciting because unlike most existing generative AI offerings for developers, it offers an orchestration layer that abstracts the complexity of combining various enterprise systems with generative AI tools to create a smooth, helpful user experience. Gen App Builder provides step-by-step orchestration of search and conversational applications with pre-built workflows for common tasks like onboarding, data ingestion, and customization, making it easy for developers to set up and deploy their apps. With Gen App Builder developers can:
Build in minutes or hours. With access to Google’s no-code conversational and search tools powered by foundation models, organizations can get started with a few clicks and quickly build high-quality experiences that can be integrated into their applications and websites.
Combine the power of foundation models with information retrieval to find relevant, personalized information. Enterprises can build apps that understand user intent via natural language, and surface the right information with associated citations and attributions from a company’s public and private data. They can also fully control what data their applications access and the content or topics they want to address.
Build multimodal apps that can respond with text, images, and other media. Gen App Builder supports not just text, but also other modalities such as images and videos. It allows developers to build apps using a combination of text and images as inputs to find information across documents, photos, and video content, enabling richer customer interactions.
Combine natural conversations with structured flows. Developers can granularly blend the output of foundation models with controls to ground answers in enterprise content, and step-by-step conversation orchestration to guide customers to the right answers.
Provide the ability to transact and connect to third party apps and services. Gen App Builder makes it simple to create digital assistants and bots that not only serve content, but also connect to purchasing and provisioning systems to enable transactions from the conversational UI, and escalate customer conversations to a human agent when the context demands.
Consumers of enterprise applications expect to interact with technology in a seamless, conversational way to quickly find the information they need and act on it. Gen App Builder can help reinvent these customer and employee experiences by ingesting large, complex datasets that are specific to your company–from websites, documents, and transactional systems like billing and inventory, to emails, chat conversations, and more. These AI-powered apps can synthesize information across all of these sources to provide specific, actionable responses, using only the data you have provided.
Some of the most popular uses are in customer service, where generative apps can contribute to increasing revenue, customer satisfaction, and customer loyalty. For example, if a retail customer reaches out to modify an order, a virtual agent can help them change it to another product. The customer doesn’t even need to provide the new product name—they can just upload an image and let the agent guide them through the rest. Watch this demo to see how a retail chatbot can use multimodal capabilities to help a consumer navigate various options on the website, including giving the customer ideas on how to use the product and even helping them complete the purchase with the ability to transact within the conversational UI. This scenario could apply to multiple industries and use cases, ranging from consumer goods and public services, to finance and internal corporate systems like intranets.
Finding the right information from data across the organization is a critical requirement within any enterprise. Yet it can be challenging to build high-quality enterprise search experiences with existing tools. Current systems struggle to understand user intent, are difficult to implement and customize, and don’t provide a high-quality user experience.
One of the most exciting features of Gen App Builder is the ability to combine the power of Google-quality search with generative AI to help enterprises find the most relevant and personalized information when they need it. With Gen App Builder, enterprises can build conversational search experiences across their public and private data in minutes or hours with no coding experience.
Enabling multimodal search across text, images and video within the enterprise is a key aspect of the search experiences in Gen App Builder. In addition to providing high-quality search results, Gen App Builder can conveniently summarize the results and provide corresponding citations in a natural, human-like fashion. Gen App Builder also automatically extracts key information from the data and enables personalized results for users. Watch this demo to see how these capabilities can come together to transform the search experience for employees at a financial services firm. The ability to integrate Google-quality search within the enterprise’s applications means they can enjoy a new level of data utilization, drive increased process efficiencies, and provide delightful experiences to their employees and customers.
“Customers have been shopping at Macy’s for generations. Being able to deliver 360° personalization and contextual recommendations will help ensure that Macy’s is still providing future generations of shoppers with a seamless, exceptional experience,” said Bennett Fox-Glassman, Senior Vice-President, Customer Journey, Macy’s. “We’ve already realized an increase in revenue per visit and conversion rates had great success using Google Cloud’s AI technology and are looking forward to exploring how these latest announcements bring together Natural Language Processing and Generative AI capabilities to deliver next-gen search and conversational experiences for our customers.”
The ability to intuitively interact with complex data across a variety of sources allows organizations to better serve their customers and deliver more relevant offerings. Combined with conversational and fulfillment abilities, the potential for improving customer engagement and employee productivity is immense. We’re excited to see how developers and enterprises use a mix of these capabilities to power new experiences and revenue opportunities.
If you’re interested in a closer look at the Gen App Builder, tune into this session at the Data Cloud & AI Summit. Take a step forward to getting hands-on and join the waitlist for our trusted tester program. And finally, bookmark our generative AIlanding page to keep abreast of the latest news, updates and possibilities from this exciting new world of Gen Apps.
Read More for the details.
Data-driven organizations often struggle with inconsistent metrics due to a lack of shared definitions, conflicting business logic, and an ever-growing number of outdated data extracts. This can lead to KPIs that don’t match, which can erode trust and threaten a thriving data culture, when teams can’t agree on basic metrics like monthly active users or pipeline growth. By defining metrics once and using them everywhere, you can improve time to insight and help reduce risk, which can result in better governance, security, and cost control.
A decade ago, Looker’s innovative semantic model helped advance the modern business intelligence (BI) industry, and today, we’re using that intelligent engine to create a single source of truth for your business — a standalone metrics layer we call Looker Modeler, available in preview in Q2.
By defining metrics and storing them in Looker Modeler, metrics can be consumed everywhere, including across popular BI tools such as Connected Sheets, Looker Studio, Looker Studio Pro, Microsoft Power BI, Tableau, and ThoughtSpot. Looker Modeler also works across cloud databases, querying the freshest data available, avoiding the need to manage data extracts, and helping to minimize the risk of insights and reports (including those that impact financial decisions) being days or weeks out of date.
Looker Modeler expands upon the data collaboration and analysis capabilities users have come to expect from the Looker family over the last decade. Models can be shared with coworkers, and new LookML drafts can be submitted for review from the company’s central data team. The result is consistent accessible metrics that can define data relationships and progress against business priorities for a wide variety of use cases. Companies can define the business representation of their data in a semantic layer, without requiring users to have direct access to the underlying database, avoiding disruptions in the event of changes to related infrastructure.
In addition to the direct integrations with several popular visualization tools, Looker Modeler offers a new SQL interface that allows tools that speak SQL to connect to Looker via JDBC. This new capability is another step toward our vision of making Looker the most open BI platform, bringing trusted data to all types of users through the tools they already use. By reaching users where they use their data, organizations can create data-driven workflows and custom applications that help promote the development of a stronger data culture, drive operational efficiency, and foster innovation.
To learn more about Looker Modeler, watch our session from the Data Cloud & AI Summit: Trusted Metrics Everywhere. To test Looker Modeler in your environment, sign up here.
Read More for the details.
Amazon GuardDuty has added new functionality to its integration with AWS Organizations to make it even simpler to enforce threat detection across all accounts in an organization. Since April 2020, GuardDuty customers can leverage its integrations with AWS Organizations to manage GuardDuty for up to 5,000 AWS accounts, as well as automatically apply threat detection coverage to new accounts added to the organization. In some case, this could still result in coverage gaps, for example, if GuardDuty was not applied to all existing accounts, or if it was unintentionally suspended in individual accounts. Now with a few steps in the GuardDuty console, or one API call, delegated administrators can enforce GuardDuty threat detection coverage for their organization by automatically applying the service to all existing and new accounts, as well as automatically identifying and remediating potential coverage drift. To learn more, see the Amazon GuardDuty account management User Guide.
Read More for the details.
Starting today, you can build, train, and deploy machine learning (ML) models in Europe (Zurich) Region.
Read More for the details.