GCP – Announcing the general availability of Trillium, our sixth-generation TPU
Read More for the details.
Starting today, you can use Amazon Route 53 Resolver DNS Firewall and DNS Firewall Advanced in the Asia Pacific (Malaysia) Region.
Route 53 Resolver DNS Firewall is a managed service that enables you to block DNS queries made for domains identified as low-reputation or suspected to be malicious, and to allow queries for trusted domains. In addition, Route 53 Resolver DNS Firewall Advanced is a capability of DNS Firewall that allows you to detect and block DNS traffic associated with Domain Generation Algorithms (DGA) and DNS Tunneling threats.
DNS Firewall can be enabled only for Route 53 Resolver, which is a recursive DNS server that is available by default in all Amazon Virtual Private Clouds (VPCs) and that responds to DNS queries from AWS resources within a VPC for public DNS records, VPC-specific domain names, and Route 53 private hosted zones.
DNS Firewall provides more granular control over the DNS querying behavior of resources within your VPCs by letting you create “blocklists” for domains you don’t want your VPC resources to communicate with via DNS, or take a stricter, “walled-garden” approach by creating “allowlists” that permit outbound DNS queries only to domains you specify. With DNS Firewall Advanced, you can also configure rules to alert on or block DNS traffic associated with more advanced DNS threats.
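For illustration, here is a minimal sketch of a “blocklist”-style setup using the AWS SDK for Python (boto3); the domain names, request IDs, VPC ID, and priorities below are hypothetical, so check the DNS Firewall documentation for the options that fit your environment.

```python
import boto3

r53r = boto3.client("route53resolver")

# Create a domain list and add the domains to block (hypothetical names).
domain_list = r53r.create_firewall_domain_list(
    CreatorRequestId="demo-1", Name="blocked-domains")
r53r.update_firewall_domains(
    FirewallDomainListId=domain_list["FirewallDomainList"]["Id"],
    Operation="ADD",
    Domains=["bad.example.com.", "malware.example.net."],
)

# Create a rule group with a BLOCK rule referencing the domain list.
rule_group = r53r.create_firewall_rule_group(
    CreatorRequestId="demo-2", Name="vpc-dns-firewall")
r53r.create_firewall_rule(
    CreatorRequestId="demo-3",
    FirewallRuleGroupId=rule_group["FirewallRuleGroup"]["Id"],
    FirewallDomainListId=domain_list["FirewallDomainList"]["Id"],
    Priority=100,
    Action="BLOCK",
    BlockResponse="NODATA",
    Name="block-low-reputation",
)

# Associate the rule group with a VPC so the rules take effect.
r53r.associate_firewall_rule_group(
    CreatorRequestId="demo-4",
    FirewallRuleGroupId=rule_group["FirewallRuleGroup"]["Id"],
    VpcId="vpc-0123456789abcdef0",
    Priority=101,
    Name="demo-association",
)
```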
Visit the AWS Region Table to see all AWS Regions where Amazon Route 53 is available. Please visit our product page and documentation to learn more about Amazon Route 53 Resolver DNS Firewall and its pricing.
Read More for the details.
This has been a year of major advancements for Chrome Enterprise, as we’ve focused on empowering organizations with an even more secure and productive browsing experience. As this year comes to a close, let’s recap some highlights in case you missed some of the helpful new capabilities available for IT and business users:
Elevating Secure Enterprise Browsing with Chrome Enterprise Premium
We introduced Chrome Enterprise Premium to deliver advanced security features and granular control over your browser environment. This includes enhanced context-aware access controls, which adapt security measures based on user and device conditions, and robust Data Loss Prevention (DLP) tools like watermarking, copy/paste restrictions, and screenshot protection to safeguard sensitive data. They can be applied right in Chrome Enterprise, without the need for additional tools or deployments. Early adopters like Snap have already reported significant security improvements and enhanced productivity. To get a closer look at Chrome Enterprise Premium, read the launch blog.
Expanding Our Security Ecosystem
A strong ecosystem is crucial to any enterprise security strategy, which is why we’ve deepened our collaboration with key security partners like Zscaler, Cisco Duo, Trellix, and Okta to extend Chrome Enterprise’s browser-based protections. Our work together delivers more comprehensive threat defense and smoother security operations. For instance, our device trust integration with Okta ensures that only devices or browser profiles meeting all security requirements can access SaaS applications and sensitive data, providing granular access control with just a few clicks. By working with other solutions, we help organizations enhance their security posture and streamline operations, ensuring customers can maximize value across their technology investments for stronger, integrated defenses. Read more.
Simplifying Management and Gaining Security Insights with Chrome Enterprise Core
Chrome Enterprise Core continues to enhance both security and manageability of the browser environment for IT teams. This year, we introduced critical updates designed to improve visibility into risk and empower organizations to better manage their Chrome security posture. IT teams can now push policies to users that sign into Chrome on iOS and manage policies by groups. We’ve expanded the security insights available in Chrome Enterprise Core at no cost, including visibility into high-risk domains and content transfers, empowering IT teams to proactively identify and address potential threats. Learn more here.
Enhancing Security and Productivity for Google Workspace
Chrome Enterprise continues to refine its management and productivity capabilities for Google Workspace with more seamless profile management and reporting. IT admins can now implement more granular policies specific to Chrome profiles, groups, and users, ensuring users can securely access critical resources while maintaining productivity, even on unmanaged devices. Learn more.
Strengthening Governance and Controls for AI-Powered Productivity
Chrome Enterprise is embracing the power of AI to enhance both productivity and security. With innovative AI-powered features like Google Lens in Chrome, tab grouping and “Help Me Write”, users can simplify their workflows. Recognizing the need for organizational oversight, we’ve prioritized giving IT admins robust tools to tailor AI usage to their specific requirements.
This year we launched policies for each feature, plus a unified policy that allows admins to turn on or off Chrome’s GenAI features by default. These controls allow organizations to leverage cutting-edge AI tools while safeguarding sensitive data and aligning with their security and privacy standards. With Chrome Enterprise Premium, enterprises can also apply data controls to unsanctioned GenAI sites for added safeguards. By providing both innovation and governance, Chrome Enterprise helps organizations harness AI responsibly and securely.
Helping Admins with an Updated Security Configuration Guide
To help organizations get enterprise-ready with secure browsing capabilities, we’ve released an updated Security Configuration Guide. This guide provides IT teams clear, actionable recommendations to configure Chrome for optimal security.
The updated guide is designed to help admins establish a robust security posture, with easy-to-follow steps for leveraging the latest security best practices. Access the updated guide here.
Take the Next Step
Ready to experience the future of secure and productive enterprise browsing? Turn on one-click security insights in Chrome Enterprise Core.
As we reflect on the past year, we’re grateful for your continued partnership and look forward to supporting your organization in 2025 and beyond. Wishing you and your team a secure, joyful, and restful holiday season!
Read More for the details.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7g instances are available in the AWS GovCloud (US-East, US-West) Regions. These instances are powered by AWS Graviton3 processors, which provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on top of the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.
Amazon EC2 Graviton3 instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).
To learn more, see Amazon EC2 C7g. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS GovCloud (US) Console.
Read More for the details.
Amazon OpenSearch Service now adds support for Graviton3-based r7gd instances with local NVMe-based SSD storage in six additional Regions: Asia Pacific (Hong Kong), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Middle East (UAE), AWS GovCloud (US-East), and AWS GovCloud (US-West). AWS Graviton3-based instances with local NVMe-based SSD storage deliver up to 45% better real-time NVMe storage performance than comparable Graviton2-based instances.
Graviton3-based r7gd instances have custom-designed processors that enable improved performance for memory-intensive workloads. They offer up to 30 Gbps enhanced networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).
Amazon OpenSearch Service Graviton3 instances are supported on all OpenSearch versions, and Elasticsearch versions 7.9 and 7.10. Please refer to the Amazon OpenSearch Service pricing page for additional information about instance types supported in different regions and their On-Demand and Reserved Instance pricing details.
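As a hedged sketch, provisioning a domain on r7gd instances with the AWS SDK for Python (boto3) might look like the following; the domain name, instance count, and engine version are illustrative assumptions, and instance types with local NVMe storage do not use EBS data volumes.

```python
import boto3

opensearch = boto3.client("opensearch")

# Hypothetical domain name and sizing; r7gd instances use local NVMe SSDs,
# so EBS is disabled for the data nodes.
opensearch.create_domain(
    DomainName="telemetry-search",
    EngineVersion="OpenSearch_2.11",
    ClusterConfig={
        "InstanceType": "r7gd.large.search",
        "InstanceCount": 3,
    },
    EBSOptions={"EBSEnabled": False},
)
```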
To learn more about Amazon OpenSearch Service, please visit the product page.
Read More for the details.
Amazon Relational Database Service (RDS) for PostgreSQL announces Amazon RDS Extended Support minor version 11.22-RDS.20241121. We recommend that you upgrade to this version to fix known security vulnerabilities and bugs in prior versions of PostgreSQL. Learn more about the updates and patches in this Extended Support minor version in the Amazon RDS User Guide.
Amazon RDS Extended Support provides you more time, up to three years, to upgrade to a new major version to help you meet your business requirements. During Extended Support, Amazon RDS will provide critical security and bug fixes for your RDS for PostgreSQL databases after the community ends support for a major version. You can run your PostgreSQL databases on Amazon RDS with Extended Support for up to three years beyond a major version’s end of standard support date.
You can use automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. Learn more about upgrading your database instances, including minor and major version upgrades, in the Amazon RDS User Guide.
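For example, a minimal sketch with the AWS SDK for Python (boto3) that opts an instance into automatic minor version upgrades might look like this (the instance identifier is hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Hypothetical instance identifier; the upgrade is applied during the
# instance's scheduled maintenance window rather than immediately.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-db",
    AutoMinorVersionUpgrade=True,
    ApplyImmediately=False,
)
```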
Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
Read More for the details.
Amazon Redshift announces the general availability of auto-copy, which simplifies data ingestion from Amazon S3 into Amazon Redshift in the AWS GovCloud (US) Regions. This new feature enables you to set up continuous file ingestion from your Amazon S3 prefix and automatically load new files to tables in your Amazon Redshift data warehouse without the need for additional tools or custom solutions.
Previously, Amazon Redshift customers had to build data pipelines using COPY commands to automate continuous loading of data from S3 into Amazon Redshift tables. With auto-copy, you can now set up an integration that automatically detects and loads new files from a specified S3 prefix into Redshift tables. Auto-copy jobs keep track of previously loaded files and exclude them from the ingestion process, and you can monitor auto-copy jobs using system tables.
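As a rough sketch, an auto-copy job can be created by running a COPY ... JOB CREATE statement, for example through the Redshift Data API with the AWS SDK for Python (boto3); the workgroup, database, table, bucket, and role names below are hypothetical, and the exact COPY JOB syntax is covered in the documentation.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Hypothetical names throughout; AUTO ON asks Redshift to keep loading
# new files as they land under the S3 prefix.
sql = """
COPY public.web_events
FROM 's3://example-bucket/events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS JSON 'auto'
JOB CREATE web_events_autocopy
AUTO ON;
"""

redshift_data.execute_statement(
    WorkgroupName="analytics",  # or ClusterIdentifier for RA3 provisioned
    Database="dev",
    Sql=sql,
)
```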
Amazon Redshift auto-copy from Amazon S3 is now generally available for both Amazon Redshift Serverless and Amazon Redshift RA3 Provisioned data warehouses in the AWS GovCloud (US) Regions. To learn more, see the documentation or check out the AWS Blog.
Read More for the details.
Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) F2 instances, featuring up to 8 FPGAs. Amazon EC2 F2 instances, the second-generation FPGA-powered instances, are purpose built to develop and deploy reconfigurable hardware in the cloud.
You can use F2 instances to power the next generation of FPGA-accelerated solutions in genomics, multimedia processing, big data, network security/acceleration, and cloud-based video broadcasting.
F2 instances are the first FPGA-based instances to feature 16GB of high-bandwidth memory. F2 instances provide up to 8 FPGAs paired with a 3rd generation AMD EPYC (Milan) processor with 3x processor cores (192 vCPU), 2x system memory (2 TiB), 2x NVMe SSD (7.6 TiB), and 4x networking bandwidth (100 Gbps) compared to F1 instances.
F2 instances are now available in the US East (N. Virginia) and Europe (London) AWS Regions in f2.12xl and f2.48xl sizes.
To learn more about F2 instances, see Amazon EC2 F2 Instances.
Read More for the details.
Today we’re excited to announce Research and Engineering Studio (RES) on AWS Version 2024.12. This release makes it possible to configure your Active Directory (AD) dynamically at runtime, allows Amazon Cognito users to launch Linux virtual desktops, and gives administrators the option to configure SSH access to virtual desktop infrastructure (VDI).
RES administrators can now manage AD parameters and enable Cognito users through the RES UI on the new Identity Management page. AD parameters that were once required when deploying RES are now optional and can be changed at any time after deployment. Admins can also add LDAP filters for users and groups to be more targeted about which AD identities get synced to RES. Cognito can now be used as an identity source and login method to either augment or replace the existing Active Directory and Single Sign-On (SSO) authentication. Cognito users can access Linux VDI sessions in the RES environment just like users that access the environment through SSO. Add Cognito users to RES by manually adding them to the RES Cognito User Pool or by activating user self-registration from the RES UI.
This release also gives administrators control over SSH access in the RES environment. SSH access to VDI sessions is now deactivated by default and can be reactivated at any time from the Permission Policy page.
See the regional availability page for the list of regions where RES is available.
Check out the additional release notes on GitHub to get started and deploy RES 2024.12.
Read More for the details.
AWS Toolkit for Visual Studio Code now includes Amazon CloudWatch Logs Live Tail, an interactive log streaming and analytics capability which provides real-time visibility into your logs, making it easier to develop and troubleshoot your serverless applications.
The Toolkit for VS Code is an open-source extension for the Visual Studio Code (VS Code) editor. This extension makes it easier for developers to develop, debug locally, and deploy serverless applications that use AWS. This new integration brings the power of Live Tail directly into the VS Code Command Palette. CloudWatch log events can now be streamed in the VS Code Editor as they are ingested in real-time. You can search, filter, and highlight log events of interest, to aid and accelerate troubleshooting, investigations, and root cause analysis.
Amazon CloudWatch Logs Live Tail for AWS Toolkit for Visual Studio Code is available in all AWS Commercial regions.
To learn more, please visit the documentation. For pricing details, check Amazon CloudWatch Pricing.
Read More for the details.
Today, AWS Backup is announcing expanded regional coverage for cross-account management in opt-in Regions (Regions that are disabled by default). Cross-account management helps customers manage and monitor backups across their AWS accounts with AWS Organizations.
With cross-account management in AWS Backup, customers can deploy an organization-wide backup policy using their AWS Organizations’ management account or delegated administrator account, and help maintain compliance across all organizational accounts while reducing account management overhead. Cross-account monitoring allows you to monitor backup activity across all the accounts in your organization from the management account.
For more information on AWS Backup cross-account management, visit the documentation. Get started with AWS Backup today.
Read More for the details.
Amazon Bedrock Guardrails enable you to implement safeguards for your generative AI applications based on your use cases and responsible AI policies. Starting today, we are excited to announce that Amazon Bedrock Guardrails adds multilingual capabilities with support for Spanish and French languages.
Amazon Bedrock Guardrails help you implement safeguards for building safe, generative AI applications by filtering undesirable content, redacting personally identifiable information (PII), and enhancing content safety and privacy. You can configure policies for content filters, denied topics, word filters, PII redaction, and contextual grounding checks to tailor safeguards to your specific use cases and responsible AI policies.
With support for Spanish and French languages, a wider set of users in multiple geographies can now use Bedrock Guardrails to build safer generative AI applications based on their use cases and responsible AI policies.
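As an illustrative, hedged sketch, creating a guardrail with content filters and PII redaction via the AWS SDK for Python (boto3) might look like the following; the guardrail name, filter choices, and blocked messages are assumptions rather than recommendations.

```python
import boto3

bedrock = boto3.client("bedrock")  # control-plane client

# Hypothetical configuration; see the Guardrails documentation for the
# full set of filter types, strengths, and PII entity types.
response = bedrock.create_guardrail(
    name="demo-guardrail",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't help with that request.",
)
print(response["guardrailId"])
```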
To learn more about Amazon Bedrock Guardrails, see the product page and the technical documentation.
Read More for the details.
The AI phase of industrial evolution is marked by a profound transformation in how humans and intelligent machines collaborate. The blurring of boundaries between physical and digital systems across the manufacturing landscape is accelerating, driven by advancements in automation, robotics, artificial intelligence, and the Internet of Things.
This interconnectedness creates unprecedented opportunities for efficiency, innovation, and customized production. However, it also exposes manufacturers to a new generation of cyber threats targeting industrial operations, supply chains, and increasingly sophisticated production processes. Safeguarding these critical assets requires a holistic approach that transcends traditional boundaries and embraces sector-wide collaboration.
To enhance our commitment to the manufacturing and industry sector, today we are announcing a new partnership with the Global Resilience Federation (GRF) by joining four of its affiliate groups: the Business Resilience Council (BRC), the Manufacturing Information Sharing and Analysis Center (MFG-ISAC), the Operational Technology Information Sharing and Analysis Center (OT-ISAC), and the Energy Analytic Security Exchange (EASE). Google Cloud is proud to be the first cloud service provider to partner with the GRF Business Resilience Council and its affiliates.
Through this partnership, Google Cloud will strengthen its commitment to the manufacturing industry by providing critical expertise and advanced security solutions. Our collaboration with industry leaders will focus on fortifying the resilience of manufacturing systems and supply chains against evolving cyber threats. This partnership underscores our dedication to supporting the manufacturing sector’s digital transformation and modernization while ensuring the security and integrity of critical infrastructure.
In today’s interconnected world, safeguarding your organization demands a comprehensive strategy that goes beyond traditional measures. Google Cloud will devote resources and experts to work alongside industry leaders to transform, secure, and defend the manufacturing sector, supporting manufacturing companies through a network of resources and expertise spanning IT, OT, industrial operations technology, supply chain, logistics, engineering technology, and product security, specifically designed to navigate the complexities of Industry 4.0 and 5.0.
This collaboration among professionals in cyber and physical security, geopolitical risk, business continuity, disaster recovery, and third-party risk management is critical for organizations with regional, national, and international footprints. In an era where the severity of cyber threats is constantly increasing, resilience is key. Partnerships fostered by GRF provide the knowledge and support necessary to maintain vigilance, manage crises, and navigate response scenarios to enable continuity of your operations.
As a GRF partner and a member of these four groups, Google Cloud will bring experts and resources — including unique insights from Mandiant, our Threat Horizons reports, and the Google Cloud Office of the CISO — to help the manufacturing and industry sector protect against cyberattacks. Google will work with defenders and sector leaders to share knowledge we’ve learned building and deploying secure technology.
This partnership is a continuation of our August 2021 commitment to invest at least $10 billion over five years to advance cybersecurity. This same commitment has enabled us to join other organizations including Health ISAC, Financial Services ISAC, and Electricity ISAC, so we can continue to support the security and resilience of our critical infrastructure across key sectors.
“Partnering with GRF and becoming a member of its affiliated groups BRC, MFG-ISAC, OT-ISAC, and EASE is a critical step in our commitment to help the manufacturing and industrial sectors transform and secure their critical infrastructure,” said Phil Venables, VP and CISO, Google Cloud. “As a leading provider of cloud technologies and security solutions, we recognize the vital role these sectors play in driving economic growth and innovation. This partnership aligns with our dedication to supporting the modernization and resilience of manufacturing and industrial operations in the face of evolving cyber threats. By sharing our expertise and collaborating with industry leaders, we aim to raise awareness, develop innovative solutions, and strengthen the collective defense of these essential industries.”
“As a provider of innovative technology solutions, we recognize the vital role of the manufacturing and industrial sectors in driving our economy. This partnership reflects our commitment to supporting their transformation and strengthening their defenses against evolving cyber threats. Through collaboration and knowledge-sharing, we aim to foster a more secure and resilient future for these essential sectors,” said Nick Godfrey, senior director and global head, Office of the CISO, Google Cloud.
“Phil Venables and Google Cloud have long advocated for collaborative security and collective resilience, and their active role in the BRC and these communities brings invaluable expertise to help build a more secure ecosystem for businesses of all sizes — including their critical vendors and suppliers,” said Mark Orsi, CEO, GRF. “Google Cloud continues its leadership in advancing security and operational resilience across manufacturing, utilities, industrial, and critical infrastructure sectors — ultimately fostering a safer and more sustainable global supply chain.”
Read More for the details.
Your business data sets you apart from the competition. It fuels your innovations, your culture, and provides all your employees a foundation from which to build and explore. Since 2022, enterprises in all industries have turned to Looker Studio Pro to empower their businesses with self-service dashboards and AI-driven visualizations and insights, complete with advanced enterprise capabilities and Google Cloud technical support.
As the Looker community has grown, we’ve gotten more requests for guidance on how users can make their Looker Studio Pro environments even stronger, and tap into more sophisticated features. Those requests have only increased, accelerated by the debut of Studio in Looker, which brings Looker Studio Pro to the broader Looker platform. To help, today we are debuting a new on-demand training course: Looker Studio Pro Essentials.
Looker Studio Pro connects businesses’ need to govern data access with individual employees’ needs to explore, build and ask questions. This Google Cloud Skills Boost course helps users go beyond the basics of setting up reports and visualizations, and provides a deep dive into Looker Studio Pro’s more powerful features and capabilities.
Here’s what you can expect to get from this course:
Gain a comprehensive understanding of Looker Studio Pro: Explore its key features and functionality, and discover how it elevates your data analysis capabilities.
Enhance collaboration: Learn how to create and manage collaborative workspaces, streamline report sharing, and automate report delivery.
Schedule and share reports: Learn how to customize scheduling options to your business, including delivery of reports to multiple recipients via Google Chat and email, based on your sharing preferences.
Ensure data security and control: Become an expert in user management, audit log monitoring, and other essential administrative tasks that can help you maintain data integrity.
Leverage Google Cloud customer care: Learn how to use Google Cloud Customer Care resources to find solutions, report issues, and provide feedback.
From your focus, to your employees, to your customers, your business is unique. That’s why we designed this course to bring value to everyone — from sales and marketing professionals, to data analysts, to product innovators — providing them with the knowledge and skills they need to fully leverage Looker Studio Pro in their own environments. Because in the gen AI era, how you leverage your data and invigorate your employees to do more is the true opportunity. Accelerate that opportunity with the new Looker Studio Pro Essentials course today.
Read More for the details.
For developers and businesses that run applications on Google Kubernetes Engine (GKE), scaling deployments down to zero when they are idle can offer significant financial savings. GKE’s Cluster Autoscaler efficiently manages node pool sizes, but for applications that require complete shutdown and startup (scaling the node pool all the way to and from zero), you need an alternative, as GKE doesn’t natively offer scale-to-zero functionality. This is important for applications with intermittent workloads or varying traffic patterns.
In this blog post, we demonstrate how to integrate the open-source Kubernetes Event-driven Autoscaler (KEDA) to achieve this. With KEDA, you can align your costs directly with your needs, paying only for the resources consumed.
Minimizing costs is a primary driver for scaling to zero, and applies to a wide variety of scenarios. For technical experts, this is particularly crucial when dealing with:
GPU-intensive workloads: AI/ML workloads often require powerful GPUs, which can be expensive to keep running even when idle.
Applications with predictable downtime: Internal tools with specific usage hours — scale down resources for applications used only during business hours or specific days of the week.
Seasonal applications: Scale to zero during the off-season for applications with predictable periods of low activity.
On-demand staging environments: Replicate production environments for testing and validation, scaling them to zero after testing is complete.
Short-term demonstrations: Showcase applications or features to clients or stakeholders, scaling down resources after the demonstration.
Temporary proof-of-concept deployments: Test new ideas or technologies in a live environment, scaling to zero after evaluation.
Development environment: Spin up resources for testing, code reviews, or feature branches and scale them down to zero when not needed, optimizing costs for temporary workloads.
Microservices with sporadic traffic: Scale individual services to zero when they are idle and automatically scale them up when requests arrive, optimizing resource utilization for unpredictable traffic patterns.
Serverless functions: Execute code in response to events without managing servers, automatically scaling to zero when inactive.
KEDA is an open-source, Kubernetes-native solution that enables you to scale deployments based on a variety of metrics and events. KEDA can trigger scaling actions based on external events such as message queue depth or incoming HTTP requests. And unlike the current implementation of Horizontal Pod Autoscaler (HPA), KEDA supports scaling workloads to zero, making it a strong choice for handling intermittent jobs or applications with fluctuating demand.
Let’s explore two common scenarios where KEDA’s scale-to-zero capabilities are beneficial:
1. Scaling a Pub/Sub worker
Scenario: A deployment processes messages from a Pub/Sub topic. When no messages are available, scaling down to zero saves resources and costs.
Solution: KEDA’s Pub/Sub scaler monitors the message queue and triggers scaling actions accordingly. By configuring a ScaledObject resource, you can specify that the deployment scales down to zero replicas when the queue is empty.
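As a minimal sketch (assuming a Deployment named worker and a Pub/Sub subscription named jobs-sub, both hypothetical), such a ScaledObject can be registered with the Kubernetes Python client; the field names follow the KEDA gcp-pubsub scaler documentation, and authentication setup is omitted.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

# ScaledObject allowing the Deployment to scale all the way to zero
# when the Pub/Sub subscription backlog is empty.
scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "pubsub-worker-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "worker"},  # hypothetical Deployment
        "minReplicaCount": 0,                  # enables scale to zero
        "maxReplicaCount": 10,
        "triggers": [
            {
                "type": "gcp-pubsub",
                "metadata": {
                    "subscriptionName": "jobs-sub",  # hypothetical subscription
                    "mode": "SubscriptionSize",      # scale on backlog depth
                    "value": "5",                    # target messages per replica
                },
                # credentials (e.g., TriggerAuthentication / Workload Identity)
                # omitted for brevity
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)
```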
2. Scaling a GPU-dependent workload, such as an Ollama deployment for LLM serving
Scenario: An Ollama-based large language model (LLM) performs inference tasks. To minimize GPU usage and costs, the deployment needs to scale down to zero when there are no inference requests.
Solution: Combining HTTP-KEDA (a beta feature of KEDA) with Ollama enables scale-to-zero functionality. HTTP-KEDA scales deployments based on HTTP request metrics, while Ollama serves the LLM.
KEDA offers a powerful and flexible solution for achieving scale-to-zero functionality on GKE. By leveraging KEDA’s event-driven scaling capabilities, you can optimize resource utilization, minimize costs, and improve the efficiency of your Kubernetes deployments. Remember to validate your usage scenarios, however, as scaling to zero can affect workload performance: when an application scales to zero, no instances are running, so the first request that arrives must wait for a new instance to start, adding cold-start latency.
There are also state-management considerations: when instances are terminated, any in-memory state is lost.
To help you get started quickly, we’ve published a guide for scaling GKE to zero with KEDA, with specific steps for scaling a Pub/Sub worker to zero with KEDA on GKE and scaling Ollama to zero with KEDA on GKE.
To learn more about KEDA and its various scalers, refer to the official KEDA documentation at https://keda.sh/docs/latest.
Read More for the details.
Dun & Bradstreet, a leading global provider of business data and analytics, is committed to maintaining its position at the forefront of innovation. For the past two years, this commitment has included the company’s deliberate approach to improving its software development lifecycle by infusing AI solutions.
While development velocity and security were the company’s most pressing considerations, Dun & Bradstreet also faced productivity and operational challenges common to many global enterprises, including:
Significant time onboarding new team members
Siloed knowledge of legacy codebases
Low test coverage
Application modernization challenges
To achieve its goal of accelerating software development, Dun & Bradstreet knew it had to take a holistic “people, process, and tools” approach to solve the traditional development lifecycle issues that most enterprise engineering teams face. The company looked to AI assistance to anchor this new effort.
As a provider of information that can move markets and drive economies, Dun & Bradstreet set a high bar for any technology tool, with expectations as demanding as those of the financial professionals and government leaders it serves.
Dun & Bradstreet executed a thorough evaluation process to identify the best partner and coding assistance tool, considering both open-source and commercial options. The company ultimately selected Gemini Code Assist due to the Gemini model’s performance, seamless integration with their existing development environment, and robust security features.
The implementation of Gemini Code Assist was a collaborative effort between Dun & Bradstreet’s development teams and the Google Cloud team. Developers were actively involved in configuring and customizing the tool to ensure it met their specific needs and workflows.
A key focus area for Dun & Bradstreet was Google’s security stance. Incorporating AI into the development process required both top-grade protection of private data and guardrails to ensure the safety of machine-generated code. Google’s security expertise and guidance allowed Dun & Bradstreet to move forward with confidence due to the following factors:
Gemini models are built in-house, allowing Google to fully validate and filter all source code samples used in model training.
Trust and verify: Integration into a company’s existing coding and review lifecycles allows developers to guide the model outputs with human oversight, without learning a whole new system.
Google’s partnership with Snyk provides additional options for automated security scanning, covering both AI-generated and human-written code.
Google’s AI Principles underpin the architecture and design decisions for Gemini Code Assist. Privacy and security protections include single-tenant storage of customer code references, encrypted logging, and fine-grained administrative controls to prevent accidental data leakage.
Google’s indemnification policies.
“AI-assisted code creation is not just a leap forward in efficiency — it’s a testament to how innovation and security can work hand-in-hand to drive business success,” said Jay DePaul, chief cybersecurity technology risk officer at Dun & Bradstreet. “By embedding robust guardrails, we’re enabling our teams to build faster, safer, and smarter.”
Dun & Bradstreet decided to move forward with Code Assist in October 2024. The solution is now starting to roll out to more teams across the organization. Adoption has been smooth, aided by Code Assist’s intuitive interface and comprehensive documentation.
Having an incubation program at a large organization helps iron out both technical and adoption blockers. For example, the Dun & Bradstreet team identified the need to educate teams that coding assistants are there to help developers as partners, not as replacements.
Now that the rollout is underway, Dun & Bradstreet is sharing the factors that drove their adoption of Gemini Code Assist.
Increased developer productivity: Gemini Code Assist’s AI-powered code suggestions and completions have significantly reduced the time developers spend writing code. The tool’s ability to automate repetitive tasks has freed up time for the developers so they can focus on more complex and creative aspects of their work.
Improved code quality: The automated code review and linting capabilities of Gemini Code Assist helped Dun & Bradstreet’s developers detect errors and potential issues early in the development process. This has led to a significant reduction in bugs and improved overall code quality.
Easier application modernization: A significant amount of time was saved when converting Spring apps to Kotlin.
Increased developer efficiency: Early internal indicators show a 30% increase in developer productivity.
Developer onboarding: New developers at Dun & Bradstreet have been able to ramp up quicker due to the real-time guidance and support provided by Gemini Code Assist.
Enhanced knowledge sharing: Gemini Code Assist has fostered a culture of knowledge sharing within Dun & Bradstreet’s development teams. The tool’s ability to provide code examples and best practices made it easier for developers to learn from each other and collaborate effectively.
Gemini Code Assist has proven to be a valuable solution for Dun & Bradstreet as it has empowered their developers with advanced AI capabilities and intelligent code assistance.
“AI-assisted code creation is a game changer for everyone involved in the solution-delivery business,” said Adam Fayne, vice president for Enterprise Engineering at Dun & Bradstreet. “It enables our teams to innovate, test, and deploy faster, without having to risk security or quality.”
The company has been able to accelerate velocity, improve software quality, and maintain its competitive edge in the market. Companies like Dun & Bradstreet trust Google Cloud and Gemini to greatly enhance their software development lifecycles. In fact, Google Cloud was recently named a Leader in the 2024 Gartner Magic Quadrant for AI Code Assistants for its Completeness of Vision and Ability to Execute.
Visit the Gemini Code Assist website for more information on use cases and how to start using it today.
Read More for the details.
Ford Pro Intelligence is a cloud-based platform used for managing and supporting the fleet operations of Ford’s commercial customers. These customers range from small businesses to large enterprises like United Parcel Service and Pepsi, where fleets can number thousands of vehicles, to government groups and municipalities like the City of Dallas. The Ford Pro Intelligence platform collects connected vehicle data from fleet vehicles to help fleet operators streamline operations, increase productivity, reduce cost of ownership, and improve their fleet’s performance and overall uptime through alerts on vehicle health and maintenance.
Telemetry data from vehicles provides a wealth of opportunity, but it also presents a challenge: planning for the future as cars and services evolve. We needed a platform that could support the volume, variety and velocity of vehicle data as automotive innovations emerge, including new types of car sensors, more advanced vehicles, and increased integration with augmented data sources like driver information, local weather, road conditions, maps, and more.
In this blog, we’ll discuss our technical requirements, the decision-making process, and how building our platform with Bigtable, Google Cloud’s flexible NoSQL database for high throughput and low-latency applications at scale, unlocked powerful features for our customers like real-time vehicle health notifications, AI-powered predictive maintenance, and in-depth fleet monitoring dashboards.
We wanted to set some goals for our platform based on our connected vehicle data. One of our primary goals is to provide real-time information for fleet managers. For example, we want to inform our fleet partners immediately if tire pressure is low, a vehicle requires brake maintenance, or there is an airbag activation, so they can take action.
Connected vehicle data can be extremely complex and variable. When Ford Pro set out to build its vehicle telemetry platform, we knew we needed a database that could handle some unique challenges. Here’s what we had to consider:
A diverse and growing vehicle ecosystem: We handle telemetry data from dozens of car and truck models, with new sensors added every year to support different requirements. Plus, we support non-Ford vehicles too!
Connectivity isn’t a guarantee: A “connected” car isn’t always connected. Vehicles go offline due to spotty service or even just driving through a tunnel. Our platform needs to handle unpredictable or duplicated streams of time-series data.
Vehicles are constantly evolving: Manufacturers frequently push over-the-air updates that change how vehicles operate and the telemetry data they generate. This means our data is highly dynamic, and our database needs to support a flexible, ever-evolving schema.
Security is paramount: At Ford, we are committed to our customers’ data privacy and security. It’s imperative to our technology. We serve customers around the world and must ensure we can easily incorporate privacy and security measures while maintaining regulatory compliance, such as GDPR, in every country where we operate.
Given these challenges, along with the application feature requirements, we knew we needed an operational data store that could support low-latency access to both real-time and historical data with a flexible schema.
The Ford Pro Intelligence platform offers a range of features and services that cater to the diverse needs of our customers. To ensure flexibility in data access, we prioritize real-time reporting of vehicle status, event-based notifications, location services, and historical journey reconstruction. These capabilities necessitate a variety of data access methods to support both real-time and historical data access — all while maintaining low latency and high throughput to meet the demands of Ford customers.
Our starting point was an Apache Druid-based data warehouse that contained valuable historical data. While Apache Druid could handle high-throughput write traffic and generate reports, it was not able to support our low-latency API requirements or high data volumes. As a result, we started working with Google Cloud to explore our options.
We began our search with BigQuery. We already used BigQuery for reporting, so this option would have given us a serverless, managed version of what we already had. While BigQuery was able to perform the queries we wanted, our API team raised concerns about latency and scale — we required single-digit millisecond latency with high throughput. We discussed putting a cache layer in front of BigQuery for faster service of the latest data but soon discovered that it wouldn’t scale for the volume and variety of requests we wanted to offer our customers.
From there, we considered several alternative options, including Memorystore and PostgreSQL. While each of these solutions offered certain advantages, they did not meet some of our specific requirements in several key areas. We prioritized low-latency performance to ensure real-time processing of data and seamless user experiences. Flexibility, in terms of schema design, to accommodate our evolving data structures and wide column requirements was also a must. Scalability was another crucial factor as we anticipated significant growth in data volume and traffic over time.
When we looked at Bigtable, its core features of scalable throughput and low latency made it a strong contender. NoSQL is an ideal option for creating a flexible schema, and Bigtable doesn’t store empty values, which is great for our sparse data and cost optimization. Time-series data is also inherent to Bigtable’s design; all data written is versioned with a timestamp, making it a naturally good fit for use cases with vehicle telemetry data. Bigtable also met our needs for an operational data store and analytics data source, allowing us to handle both of these workloads at scale on a single platform. In addition, Bigtable’s data lifecycle management features are specifically geared toward handling the time-oriented nature of vehicle telemetry data. The automated garbage collection policies use time and version as criteria for purging obsolete data effectively, enabling us to manage storage costs and reduce operational overhead.
In the end, the choice was obvious, and we decided to use Bigtable as our central vehicle telemetry data repository.
We receive vehicle telemetry data as a protocol buffer to a passthrough service hosted on Compute Engine. We then push that data to Pub/Sub for Google-scale processing by a streaming Dataflow job that writes to Bigtable. Ford Pro customers can access data through our dashboard or an API for both historical lookback for things like journey construction and real-time access to see fleet status, position, and activity.
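To illustrate the kind of write the Dataflow job performs, here is a hedged sketch using the Bigtable client library for Python; the project, instance, table, column family, and row-key scheme are hypothetical stand-ins, not Ford’s actual schema.

```python
from datetime import datetime, timezone

from google.cloud import bigtable

# Hypothetical identifiers throughout.
client = bigtable.Client(project="fleet-telemetry-demo")
table = client.instance("telemetry-instance").table("vehicle_telemetry")

# Every cell is versioned with a timestamp, which is what makes Bigtable
# a natural fit for time-series vehicle signals.
now = datetime.now(timezone.utc)
row = table.direct_row(f"vin123#{now.isoformat()}")
row.set_cell("signals", "tire_pressure_psi", b"31.5", timestamp=now)
row.set_cell("signals", "odometer_km", b"48213", timestamp=now)
row.commit()
```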
With Bigtable helping to power Ford Pro Telematics, we have been able to provide a number of benefits for our customers, including:
Enabling the API service to access telematics data
Improving data quality with Bigtable’s built-in time series data management features
Reducing operational overhead with a fully managed service
Delivering robust data regulation compliance tooling across regions
The platform provides interactive dashboards that display relevant information, such as real-time vehicle locations, trip history, detailed trip information, live map tracking, and EV charging status. Customers can also set up real-time notifications about vehicle health and other important events, such as accidents, delays, or EV charging faults. For example, a fleet manager can use the dashboard to track the location of a vehicle and dispatch assistance if an accident occurs.
We leverage BigQuery alongside Bigtable to generate reports. BigQuery is used for long-running reports and analysis, while Bigtable is used for real-time reporting and direct access to vehicle telemetry. Regular reports are available for fleet managers, including vehicle consumption, driver reimbursement reports, and monthly trip wrap-ups. Our customers can also leverage and integrate this data into their own tooling using our APIs, which enable them to query vehicle status and access up to one year of historical data.
The automotive industry is constantly evolving, and with the advent of connected vehicles, there are more opportunities than ever before to improve the Ford commercial customer experience. Adopting a fully managed service like Bigtable allows us to spend less time maintaining our own infrastructure and more time innovating and adding new features to our platform. Our company is excited to be at the forefront of this innovation, and we see many ways that we can use our platform to help our customers.
One of the most exciting possibilities is the use of machine learning to predict vehicle maintenance and create service schedules. By collecting data from vehicle diagnostics over time, we can feed this information into machine learning models that can identify patterns and trends. This will allow us to alert customers to potential problems before they even occur, and to schedule service appointments at the most convenient times.
Another area where we can help our customers is in improving efficiency. By providing insights about charging patterns, EV range, and fuel consumption, we can help fleet managers optimize their operations. For example, if a fleet manager knows that there are some shorter routes for their cars, they can let those cars hit the road without a full charge. This can save time and money, and it can also reduce emissions.
In addition to helping our customers save time and money, we are also committed to improving their safety and security. Our platform can provide alerts for warning lights, oil life, and model recalls. This information can help customers stay safe on the road, and it can also help them avoid costly repairs.
We are already getting great feedback from customers about our platform, and we are looking forward to further increasing their safety, security, and productivity. We believe that our platform has the potential to revolutionize the automotive industry, and we are excited to be a part of this journey.
Learn more about Bigtable and why it is a great solution for automotive telemetry and time series data.
Read more on how others like Palo Alto Networks, Flipkart, and Globo are reducing cloud spend while improving service performance, scalability and reliability by moving to Bigtable.
Ready to unlock the power of Bigtable? Start a free trial today!
Read More for the details.
You can now configure holidays and other variances to your contact center hours of operation with “overrides” in Amazon Connect, using APIs or the admin website. Overrides are exceptions to your contact center’s standard day-of-the-week operating hours. For example, if your contact center opens at 9am and closes at 10pm, but on New Year’s Eve you want to close at 4pm to allow your agents to get home in time to celebrate, you can add an override to do so. When the holiday arrives and you close your contact center early, callers get the after-hours customer experience.
Hours of operation overrides are supported in all AWS Regions where Amazon Connect is offered. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website. To learn more about hours of operation, see the Amazon Connect Administrator Guide.
Read More for the details.