Azure – Public preview: Azure Static Web Apps support for A Record
You can now add an APEX custom domain to your Static Web Apps with A Records
Read More for the details.
Use the newly available pg_hint_plan extension to tweak PostgreSQL execution plans and the semver extension to handle semantic versioning in Azure Database for PostgreSQL – Flexible Server.
Read More for the details.
Customers can now take advantage of the unlimited virtualization licensing capability included with SQL Server Software Assurance by using Azure Hybrid Benefit for SQL Server on Azure VMware Solution.
Read More for the details.
You can now make REST or GraphQL requests to a built-in `/data-api` endpoint to retrieve and modify contents of a connected database, without having to write backend code.
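As a rough illustration, here is what querying the built-in endpoint could look like; the site URL and the Book entity below are hypothetical placeholders, assuming a configured database connection:

```python
# Hypothetical sketch: querying a Static Web Apps database connection.
# The site URL and "Book" entity are placeholders, not real resources.
import requests

base = "https://my-app.azurestaticapps.net/data-api"

# REST: list rows of the connected "Book" entity
books = requests.get(f"{base}/rest/Book").json()

# GraphQL: fetch the same data through the GraphQL endpoint
query = {"query": "{ books { items { id title } } }"}
result = requests.post(f"{base}/graphql", json=query).json()
```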
Read More for the details.
Azure Backup is announcing the public preview of Backup for AKS, allowing customers to protect their applications with the ability to back up and restore AKS clusters.
Read More for the details.
AWS Storage Gateway expands availability to the AWS Asia Pacific (Hyderabad) and AWS Asia Pacific (Melbourne) Regions, enabling customers to deploy and manage hybrid cloud storage for their on-premises workloads.
Read More for the details.
At Google Cloud, our north star for security success today is to help customers apply cloud-scale, modern security everywhere they operate. As part of our mission to help customers achieve these objectives, we host a quarterly digital discussion event, Google Cloud Security Talks, where we bring together experts from the Google Cloud family and across the industry at large. We’ve designed these sessions to share insights, best practices, and ways to leverage new capabilities to help increase resilience against modern risks and threats.
The first installment of Google Cloud Security Talks 2023 on March 22 will focus on transforming security with frontline intelligence and the latest security products from Google Cloud. During these Security Talk sessions, you’ll explore the latest threat intelligence, security approaches, and Google Cloud product innovations. You’ll walk away with a better understanding of threat actors and potential attack vectors and fresh ideas for detecting, investigating, and responding to threats faster.
We’ll kick things off with a deep dive session where I’ll chat with Google Cloud’s Jeff Reed, VP of Product Management, about the ever-changing cybersecurity landscape, the latest trends, and how Google Cloud Security can help customers tackle these challenges in 2023.
Here’s a quick peek at the other insightful sessions you can look forward to on our agenda:
Frontline Threat Intel Panel: Watch our panel of threat intelligence experts discuss today’s threat landscape, trends they are seeing, and best practices for leveraging threat intelligence.
Cyber Crime Metamorphosis: Enjoy a deep dive case study with Mandiant experts as they examine how cyber criminals adapt their tactics to chaos.
Managing Open Source Software Security: Learn how to protect your software supply chain by easily incorporating the same OSS packages that Google uses in our own workflows.
What’s New with Cloud Armor: See the latest product innovations available now with Cloud Armor for DDoS protection.
Defeating Cryptomining Attacks with Native Security Controls: Learn how to detect and respond to cryptomining attacks and new security deployment options, including self-service.
State of Cloud Threat Detection & Response Survey Report: Explore the results of a recent cloud security survey to discover the main challenges and opportunities for SecOps teams as they transform to a cloud-first mindset.
Improve Decision Making with Automated Contextual Awareness: See how the Chronicle Security Operations Suite can help you automatically apply valuable context to your events and improve security decision-making.
Still need one last reason to join us? Security Talks is 100% digital and free to attend — so make sure you sign up now to grab a virtual front-row seat to learn about our latest insights and solutions. We’re looking forward to seeing you on March 22.
Read More for the details.
Editor’s note: March is celebrated as Women’s History Month in the US. At Google, we’re proud to celebrate women from all backgrounds and are committed to increasing the number of women in the technology industry. Over the next few weeks, we will showcase women-led startups and how they use Google Cloud to grow their businesses. Today’s feature highlights digital fashion company DRESSX and its co-founder, Daria Shapovalova.
Wouldn’t it be great if your digital wardrobe was just as exciting and stylish — or perhaps even more so — as your physical outfits? Imagine putting on different clothes for social media or your dating profile, or dressing your avatar in the latest high fashions. That’s what we offer at DRESSX. Along with our designer partners, we create thousands of digital outfits, from the craziest trainers and cat suits to a top that would be perfect for a business meeting.
Buying one of our digital outfits is simple. You upload your photo to the website, make a payment, and DRESSX returns your photo wearing your desired outfit. The fit is amazing, just as if you were wearing the digital clothes in real life. We will also dress your avatar for the metaverse and gaming environments, and our augmented reality range is expanding fast.
We launched in 2020, initially targeting Gen-X and millennial customers. Today, we are the world’s largest retailer of digital-only clothing with more than 50,000 items sold across our website, app, and partner channels. Our partners include Meta, which offers digital clothing to customers and their avatars in the metaverse. Google has collaborated with us on a range of clothing inspired by the Google Pixel phone and we offer a really vibrant clothing range inspired by Coca-Cola and its Dreamworld drink.
Our NFT collection is also expanding. This includes a limited collection of outfits for Adidas Originals and Ready Player Me. NFT garments can be worn through AR technology on our app, on metaverse platforms such as Decentraland, or even as game skins. Instead of just storing their NFT asset in a digital wallet, customers get to display their gear to an audience of thousands or even millions.
Sustainability is also one of our core values. The fashion industry generates 10 percent of the world’s greenhouse gas emissions and 20 percent of wastewater1. By releasing digital collections, brands can cut carbon emissions and reduce water use in the manufacturing supply chain. When we complete a project for a fashion brand that also offers real-life clothing, we calculate for them the amount of CO2 and water that they save. We also collaborated with BCG to provide insights for its Metaverse and Sustainability in Fashion report.
Our journey offers lessons for other women entrepreneurs who still make up a small percentage of the startup economy. This is something that needs to change if we are going to promote diversity and drive innovation in fashion and other industries.
My advice for aspiring founders is to start as soon as possible. If you can only spare a few hours per week, then focus on engaging with individuals and support groups who can guide you on your journey. That’s how I discovered Google Cloud and its programs for startups and women entrepreneurs. Successful applicants get access to many hours of business and technology training. You are also eligible for credits to invest in Google Cloud tools.
The Google Cloud team also put me in touch with Zazmic, a Google Cloud Premier Partner with 400 architects who work with businesses from startups to global brands. Zazmic saw the potential of our business and understood our specific challenges in the startup space. They also helped us maximize the value of the credits that we received through Google Cloud in 2020.
As a fast-growing business that needs to scale, we followed Zazmic’s advice to deploy on Compute Engine, a highly secure service that enables us to create and run virtual machines on Google Cloud infrastructure. We have also deployed App Engine, a development environment for our website and mobile applications. The team at Zazmic also guided us to unlock more resources from Google Cloud, including credits, in 2022 to continue to grow our business. This gave us a solid platform when attracting investors, and we have now raised a total of $3M in funding.
Looking to the future, our partnership with Zazmic and Google Cloud will help us innovate and maintain our leadership in digital fashion. In the next ten years we believe that everyone will have a digital and a physical wardrobe and that our audience will greatly expand to engage shoppers of all ages. This rapidly growing market is estimated to be worth $50B by 20302.
We are also exploring the potential of ‘phygital’ NFTs underpinned by our sustainability values. We collaborated with a designer who creates outfits from 100% natural materials that can be decomposed even at home. In this instance, he designed a necklace and dress made of dried moss. These pieces will be custom-made for a person who buys the corresponding NFT.
Digital fashion will also support a huge ecosystem of designers, stylists, and models. If fashion is your profession or something you do on the side, we see plenty of opportunities to explore a career in this industry.
When we started DRESSX, no one knew about digital fashion, and everyone thought we were crazy for selling “just air” for $30. Today, everyone from big brands to a growing number of independent designers is entering the digital fashion storefront. We hope that our journey can inspire female entrepreneurs and founders to start their own journeys in the retail space.
If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
1. ActNow for Zero-Waste Fashion
2. Metaverse: A $50 Billion Revenue Opportunity for Luxury
Read More for the details.
As deep learning models become increasingly complex and datasets larger, distributed training is all but a necessity. Faster training makes for faster iteration to reach your modeling goals. But distributed training comes with its own set of challenges.
On top of deciding what kind of distribution strategy you want to use and making changes to your training code, you need a way to manage infrastructure, optimize usage of your accelerators, and deal with limited bandwidth between nodes. This added complexity can slow your progress.
In this post, we’ll show you how to speed up training of a PyTorch + Hugging Face model using Reduction Server, a Vertex AI feature that optimizes bandwidth and latency of multi-node distributed training on NVIDIA GPUs for synchronous data parallel algorithms.
Before diving into the details of Reduction Server and how to submit jobs on the Vertex AI training service, it’s useful to understand the basics of distributed data parallelism. Data parallelism is just one way of performing distributed training and can be used when you have multiple accelerators on a single machine, or multiple machines each with multiple accelerators.
To get an understanding of how data parallelism works, let’s start with a linear model. We can think of this model in terms of its computational graph. In the image below, the matmul op takes in the X and W tensors, which are the training batch and weights respectively. The resulting tensor is then passed to the add op with the tensor b, which is the model’s bias terms. The result of this op is Ypred, which is the model’s predictions.
We want a way of executing this computational graph such that we can leverage multiple workers. One way we might do this is by splitting the input batch X in half, and sending one slice to GPU 0 and the other to GPU 1. In this case, each GPU worker calculates the same ops but on different slices of the data.
Adding this additional worker allows us to double the batch size. Each GPU gets a separate slice of data, the GPUs calculate the gradients, and these gradients are averaged. So if one GPU processes a batch of 32, with two GPUs your effective batch size becomes 64, and with 4 GPUs it would become 128. By adding more GPUs, your model sees more data on each training step, which means it takes less time to finish an epoch (a full pass through the training data). This is the core idea of data parallelism.
But, we’ve glossed over a key detail here. If both workers calculate the gradients on a different slice of data, then they will compute different gradients. So at the end of the backwards pass, we now have two different sets of gradients.
When you’re doing synchronous data parallel training, you want to take these multiple sets of gradients and turn them into one set. We do this by averaging the gradients in a process known as AllReduce, and the optimizer then uses these averaged gradients to update the model weights.
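To make this concrete, here is a toy, single-process sketch of the idea: two model replicas compute gradients on different slices of a batch, the gradients are averaged, and both replicas apply the same update. The two-parameter linear model and random data are purely illustrative:

```python
# Toy illustration of synchronous data parallelism in a single process:
# two "workers" see different slices of the batch, and gradients are averaged.
import torch

w = torch.zeros(2)                          # shared initial weights
X, y = torch.randn(8, 2), torch.randn(8)    # one full batch
grads = []
for X_slice, y_slice in zip(X.chunk(2), y.chunk(2)):  # one slice per worker
    w_rep = w.clone().requires_grad_(True)            # each worker's replica
    loss = ((X_slice @ w_rep - y_slice) ** 2).mean()
    loss.backward()
    grads.append(w_rep.grad)
avg_grad = torch.stack(grads).mean(dim=0)   # the AllReduce step: average gradients
w = w - 0.1 * avg_grad                      # every worker applies the same update
```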
In order to compute the average, each worker needs to know the values of the gradients computed by all other workers. We want to pass this information between these nodes as efficiently as possible and use as little bandwidth as possible. There are many different algorithms for efficiently implementing this aggregation, such as Ring AllReduce, or other tree based algorithms. On Vertex AI, you can use Reduction Server, which optimizes bandwidth and latency of multi-node distributed training on NVIDIA GPUs for synchronous data parallel algorithms.
To summarize, a distributed data parallel setup works as follows:
Each worker device performs the forward pass on a different slice of the input data to compute the loss.
Each worker device computes the gradients based on the loss function.
These gradients are aggregated (reduced) across all of the devices.
The optimizer updates the weights using the reduced gradients, thereby keeping the devices in sync.
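In PyTorch, the DistributedDataParallel (DDP) wrapper takes care of step 3, overlapping the gradient all-reduce with the backward pass, while the rest of the loop is ordinary training code. Below is a minimal sketch of a DDP loop, assuming it is launched with torchrun so the rank and world-size environment variables are set; the linear model and random tensors are stand-ins for your real model and data:

```python
# Minimal PyTorch DistributedDataParallel (DDP) training loop sketch.
# Assumes launch via torchrun, which sets RANK, LOCAL_RANK, and WORLD_SIZE.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")  # NCCL backend performs the all-reduce on GPUs
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 1).cuda(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])       # wrap the model for data parallelism
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for step in range(10):
        x = torch.randn(32, 128, device=local_rank)   # each rank gets its own data slice
        y = torch.randn(32, 1, device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)   # forward pass and loss on this rank's slice
        loss.backward()               # gradients are all-reduced (averaged) across ranks
        optimizer.step()              # every rank applies the same averaged gradients

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```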
Note that while data parallelism can be used to speed up training across multiple devices on a single machine, or multiple machines in a cluster, Reduction Server works specifically in the latter case.
Vertex AI Reduction Server introduces an additional worker role, a reducer. Reducers are dedicated to one function only: aggregating gradients from workers. Because of their limited functionality, reducers don’t require a lot of computational power and can run on relatively inexpensive compute nodes.
The following diagram shows a cluster with four GPU workers and five reducers. GPU workers maintain model replicas, calculate gradients, and update parameters. Reducers receive blocks of gradients from the GPU workers, reduce the blocks and redistribute the reduced blocks back to the GPU workers.
To perform the all-reduce operation, the gradient array on each GPU worker is first partitioned into M blocks, where M is the number of reducers. A given reducer processes the same partition of the gradient from all GPU workers. For example, as shown on the above diagram, the first reducer reduces the blocks a0 through a3 and the second reducer reduces the blocks b0 through b3. After reducing the received blocks, a reducer sends back the reduced partition to all GPU workers.
If the size of a gradient array is K bytes, each node in the topology sends and receives K bytes of data. By comparison, ring and tree based all-reduce implementations require each node to send and receive roughly 2K(n-1)/n bytes, which approaches 2K as the number of workers n grows, so Reduction Server exchanges almost half the data. An additional advantage of Reduction Server is that its latency does not depend on the number of workers.
Reduction Server can be used with any distributed training framework that uses the NVIDIA NCCL library for the all-reduce collective operation. You do not need to change or recompile your training application.
In the case of PyTorch, you could use the DistributedDataParallel (DDP) or FullyShardedDataParallel (FSDP) distributed training strategies. Once you’ve made the necessary changes to your PyTorch training code, you can leverage Reduction Server by:
Installing the Reduction Server NVIDIA NCCL transport plugin in your training container image.
Configuring a Vertex AI Training custom job that includes a Reduction Server worker pool.
Reduction Server is implemented as an NVIDIA NCCL transport plugin. This plugin must be installed on the container image that is used to run your training application. The plugin is included in the Vertex AI pre-built training containers.
Alternatively, you can install the plugin yourself by including the following in your Dockerfile:
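The snippet below is a sketch; the apt repository and package name follow the Vertex AI documentation, but verify them against the current docs before use:

```dockerfile
# Install the Reduction Server NCCL transport plugin
# (repository and package names assumed from Vertex AI documentation).
RUN echo "deb https://packages.cloud.google.com/apt google-fast-socket main" \
      > /etc/apt/sources.list.d/google-fast-socket.list \
    && curl -s -L https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
    && apt-get update \
    && apt-get install -y google-reduction-server
```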
Vertex AI provides up to 4 worker pools to cover the different types of machine tasks you would encounter when doing distributed training. You can think of a worker as a single machine. And each worker pool is a collection of machines performing similar tasks.
Worker pool 0 configures the primary, variously called the chief, scheduler, or “master.” This worker generally takes on some extra work such as saving checkpoints and writing summary files. There is only ever one chief worker in a cluster, so the worker count for worker pool 0 will always be 1.
Worker pool 1 is where you configure the rest of the workers for your cluster.
Worker pool 2 manages Reduction Server reducers. When choosing the number and type of reducers, you should consider the network bandwidth supported by a reducer replica’s machine type. In Google Cloud, a VM’s machine type defines its maximum possible egress bandwidth. For example, the egress bandwidth of the n1-highcpu-16 machine type is limited at 32 Gbps.
First, you define the job. The example below assumes your code is structured as a Python source distribution, but a custom container would work as well.
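A minimal sketch using the Vertex AI Python SDK follows; the project, bucket, package URI, module name, and container image are placeholders, not real resources:

```python
# Sketch: define a Vertex AI custom training job from a Python source distribution.
# All names and URIs below are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-bucket",
)

job = aiplatform.CustomPythonPackageTrainingJob(
    display_name="pytorch-bert-finetune",
    python_package_gcs_uri="gs://my-bucket/trainer-0.1.tar.gz",  # packaged training code
    python_module_name="trainer.task",                           # entry-point module
    container_uri="us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-13:latest",
)
```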
After defining the job, you can call run and specify a cluster configuration that includes reduction server reducers.
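Here is a sketch of such a configuration; the machine types, accelerators, and counts are illustrative choices to tune for your own workload:

```python
# Sketch: launch the job with 3 GPU machines plus 4 Reduction Server replicas.
# Machine types, accelerators, and counts are illustrative.
job.run(
    machine_type="n1-standard-16",
    accelerator_type="NVIDIA_TESLA_V100",
    accelerator_count=2,
    replica_count=3,                   # worker pools 0 and 1: one chief + two workers
    reduction_server_replica_count=4,  # worker pool 2: the reducers
    reduction_server_machine_type="n1-highcpu-16",
    reduction_server_container_uri=(
        "us-docker.pkg.dev/vertex-ai-restricted/training/reductionserver:latest"
    ),
)
```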
The above run config specifies a cluster of 7 machines total:
One chief worker
Two additional workers
Four reducers
Note that the reducers need to run the reduction server container image provided by Vertex AI.
If you’d like to see sample code, check out this notebook.
In general, computationally intensive workloads that require a large number of GPUs to complete training in a reasonable amount of time, and where the trained model has a large number of parameters, will benefit the most from Reduction Server. This is because the latency for standard ring and tree based all-reduce collectives is proportional to both the number of GPU workers and the size of the gradient array. Reduction Server optimizes both: latency does not depend on the number of GPU workers, and the quantity of data transferred during the all-reduce operation is lower than ring and tree based implementations.
One example of a workload that fits this category is pre-training or fine-tuning language models like BERT. Based on exploratory experiments, we saw more than 30% reduction in training time for this type of workload.
The diagrams below show the training performance comparison of a PyTorch distributed training job in a multi-node, multi-process environment with and without Reduction Server. The PyTorch distributed training job fine-tuned the pretrained BERT large model bert-large-cased from Hugging Face on the imdb dataset for sentiment classification.
In this experiment, we observed that Reduction Server increased the training throughput by 80%, and reduced the training time and therefore the training cost by more than 45%.
This benchmark result highlights Reduction Server’s ability to optimize PyTorch distributed training on GPUs, but the exact impact Reduction Server has on training time and throughput depends on the characteristics of your training workload.
In this article you learned how the Vertex AI Reduction Server architecture provides an AllReduce implementation that minimizes latency and data transferred by utilizing a specialized worker type that is dedicated to gradient aggregation. If you’d like to try out a working example from start to finish, you can take a look at this notebook, or take a look at this video to learn more about distributed training with PyTorch on Vertex AI.
It’s time to use Reduction Server and run some experiments of your own!
Read More for the details.
On the eve of Mobile World Congress 2023 in Barcelona, we announced that we can now run radio access network (RAN) functions as software on Google Distributed Cloud Edge (GDC Edge), providing cloud service providers (CSPs) with a common and agile operating model that extends from the core of the network to the edge, for a high degree of programmability and flexibility, as well as low operating expenses. Following that announcement, we outlined our Cloud RAN solution that comprises four key pillars, including partnerships.
We embarked on our strategic partnership with Nokia for 5G Cloud RAN some time ago. Since then, our teams have been collaborating to develop, integrate, and validate our products to produce an end-to-end solution that helps address our customers’ RAN evolution needs. During this year’s MWC, we showcased 5G Cloud RAN solutions with Nokia DU and CU containerized software running on GDC Edge.
In February, our teams reached a remarkable milestone that was showcased at MWC: the Nokia AirScale Cloud RAN containerized DU and CU running on GDC Edge, powered by a Nokia Cloud RAN SmartNIC L1 inline accelerator. The teams successfully executed the first end-to-end L3 data call on this testbed. The testbed used a single, real UE (User Equipment), with a 100MHz 5G cell sector deployed in TDD (Time Division Duplex) mid-band in a Standalone (SA) configuration, along with a 5G SA core. We achieved robust end-to-end data speeds with consistently low latency and jitter, a testament to the solid foundation of our joint solution.
Achieving the performance and power utilization on 5G Cloud RAN deployments that’s on par with traditional RAN equipment is a priority for CSPs. The performance of Nokia’s AirScale Cloud RAN solution running on GDC Edge with Nokia Cloud RAN SmartNIC will help customers reach those goals, while also delivering feature parity between traditional and Cloud RAN deployments.
“A collaborative approach to Cloud RAN means that we can drive efficiency, innovation, openness, and scale by jointly delivering competitive advantage to organizations that are embracing Cloud RAN. Together with Google Cloud we have made a successful L3 data call, which is a great achievement and huge progress towards driving consistent performance across cloud infrastructure. We look forward to continuing to work together in this journey,” said Pasi Toivanen, Head of Cloud Partnerships, Nokia.
“This is an exciting milestone in Nokia and Google Cloud’s collaboration and takes us a big step forward in offering CSPs the performance, energy savings, and value they need from a cloud-native Cloud RAN solution,” said Don Tirsell, Head of Global Telecommunications Partnerships, Google Cloud. “Together with Nokia, we look forward to engaging CSPs and helping them prove out and accelerate their RAN modernization journey.”
We are on an exciting journey together, and over the coming months we will share more updates with you. To learn more about everything Google Cloud is doing to help CSPs please visit our Telecommunications industry page or reach out to your Google representatives.
Read More for the details.
Google Cloud provides many layers of security for protecting your users and data. Session length is a configuration parameter that administrators can set to control how long users can access Google Cloud without having to reauthenticate. Managing session length is foundational to cloud security and it ensures access to Google Cloud services is time-bound after a successful authentication.
Google Cloud session management provides flexible options for setting up session controls based on your organization’s security policy needs. To further improve security for our customers, we are rolling out a recommended default 16-hour session length to existing Google Cloud customers.
Many apps and services can access sensitive data or perform sensitive actions. It’s important that only specific users can access that information and functionality, and only for a limited period of time. By requiring periodic reauthentication, you make it more difficult for unauthorized people to obtain that data if they gain access to credentials or devices.
There are two tiers of session management for Google Cloud: one for managing user connections to Google services (e.g. Gmail on the web), and another for managing user connections to Google Cloud services (e.g. Google Cloud console). This blog outlines the session control updates for Google Cloud services.
Google Cloud customers can quickly set up session length controls by selecting the default recommended reauthentication frequency. For existing customers who have session length configured to Never Expire, we are updating the session length to 16 hours.
This new default session length rollout helps our customers gain situational awareness of their security posture. It helps ensure that customers have not mistakenly granted an infinite session length to users or apps using OAuth user scopes. After a time-bound session expires, users will need to reauthenticate with their login credentials to continue their access. The session length changes impact the following services and apps:
The Google Cloud console
The gcloud command-line tool (Google Cloud SDK)
Any other app that requires Google Cloud scopes
The session control settings can be customized for specific organizations, and the policies apply to all users within that organization. When choosing a session length, admins have the following options:
Choose from a range of predefined session lengths, or set a custom session length between 1 and 24 hours. This is a timed session length that expires the session based on the session length regardless of the user’s activity.
Configure whether users can use just their password, or are required to use a Security Key to reauthenticate.
Session length will be set to 16 hours by default for existing customers and can be configured at the Organizational Unit (OU) level. Here are steps for admins and users to get started:
Admins: Find the session length controls at Admin console > Security > Access and data control > Google Cloud session control. Visit the Help Center to learn more about how to set session length for Google Cloud services.
End users: If a session ends, users will simply need to log in to their account again using the familiar Google login flow.
If your organization uses a third-party SAML-based identity provider (IdP), the cloud sessions will expire, but the user may be transparently re-authenticated (i.e., without actually being asked to present their credentials) if their session with the IdP is valid at that time. This is expected behavior as Google will redirect the user to the IdP and accept a valid assertion from the IdP. To ensure that users are required to reauthenticate at the correct frequency, evaluate the configuration options on your IdP and review the Help Center to Set up SSO via a third party Identity provider.
Some apps are not designed to gracefully handle the reauthentication scenario, causing confusing app behaviors or stack traces. Some other apps are deployed for server-to-server use cases with user credentials instead of the recommended service account credential, in which case there is no user to periodically reauthenticate. If you have specific apps like this, and you do not want them to be impacted by session length reauthentication, the org admin can add these apps to the trusted list for your organization. This will exempt the app from session length constraints, while implementing session controls for the rest of the apps and users within the organization.
Available to all Google Cloud customers
Gradual rollout starting on March 15, 2023.
Help Center: Set session length for Google Cloud services
Help Center: Control which third-party & internal apps access Google Workspace data
Help Center: Use a security key for 2-Step Verification
Read More for the details.
To create a clean energy future, buyers and sellers must transact faster and smarter. As a long-time champion of clean energy, we have learned many lessons over the decades — and one of the most significant is we’re not moving fast enough. That’s why, in collaboration with LevelTen Energy and with feedback from energy sellers, we piloted a new approach that reduces the time to negotiate and execute a clean energy power purchase agreement (PPA) by roughly 80%.
We’re hopeful this new approach will give clean energy buyers and sellers useful new options for negotiating PPAs and, more importantly, enable all organizations that want to decarbonize to join us on the journey to 24/7 carbon-free energy.
Most carbon free energy (CFE) transactions are either negotiated bilaterally between parties, or more commonly, facilitated by a Request for Proposal (RFP) with the ultimate goal of signing PPAs. These negotiations are lengthy, complex, and unique to every buyer and seller. Even before buyers and sellers enter into negotiations, traditional RFPs create an information imbalance between parties. Uncertainty and unpredictability create delicate conditions that can increase the complexity of negotiations between parties down the line.
Perhaps most importantly, RFP conversations can drag on for anywhere from ten months to more than a year, increasing the cost for both buyers and sellers. The long time period is often the result of complex negotiations and can also be due to resource constraints. Most buyers have limited personnel working on procuring clean energy through PPAs, and sellers have finite time to dedicate to any one project.
The length of traditional RFP negotiations creates additional challenges for sellers, who face a high risk associated with project development, as they often need to finalize the PPA before they can continue to invest in the project. Also, traditionally, sellers do not know the final contractual details before offering a price.
All of these conditions create unnecessary barriers for both buyers and sellers to develop clean energy portfolios, slowing the pace of clean energy deployment and hindering efforts to displace fossil fuels, reduce pollution in communities, and mitigate climate change.
In partnership with LevelTen, Google has established a new, scalable procurement approach that breaks down the barriers of traditional RFPs to benefit both buyers and sellers while accelerating grid decarbonization. We have achieved this by zeroing in on two areas of improvement — the RFP process and the PPA contract itself.
The RFP process: We expanded upon the LevelTen Energy Marketplace user experience to generate a new kind of RFP.
First, this provides sellers the flexibility to customize the ways in which they offset risk, and requires them to agree to those terms when submitting a proposal.
Second, this creates a transparent and reliable way for sellers to verify how their offers are evaluated in real time, and reduces the risk associated with entering into negotiations, only to determine after time that they can’t proceed with the terms.
Third, it enables sellers to create pricing based on the final contractual details, as opposed to speculating on future terms that are likely to change during the negotiations.
The PPA itself: We drew from our experience negotiating PPAs since 2010 and used market feedback from sellers and LevelTen to design a PPA that’s already risk-balanced between the buyer and seller, eliminating the need for protracted negotiations.
These changes significantly shorten the start-to-finish time of executing clean energy PPAs. Whereas traditional deals can take more than a year, this innovative approach has already enabled contracting for new “additional” clean energy in just two months.
A significantly faster and easier RFP process for clean energy deals will help us work toward our own ambitious goal of operating on 24/7 carbon free energy. It also stands to do much more for the power industry broadly by helping standardize the buying and selling of clean energy. This makes it easier for all types of end-use buyers to purchase new additional carbon free energy and expands the marketplace to create access for smaller sellers to participate.
Innovation like this — reducing the complexity of buying and selling clean energy — will accelerate the decarbonization of grids worldwide. It’s something the International Energy Agency has made clear we must do to meet global net zero targets, and it’s an urgent requirement to keep the planet under 1.5°C of warming.
This is just the beginning. Together, Google and LevelTen aim to make the approach widely available to buyers and sellers later this year. Over time, we’ll continue to make improvements that enable this to become a new standard for clean energy procurement. The clean grid of the future may arrive sooner than we think.
Read More for the details.
Application Auto Scaling customers can now use arithmetic operations and mathematical functions to customize the metrics used with Target Tracking policies. Target Tracking works like a thermostat – it continuously changes the capacity of the scaled resource to maintain the scaling metric at the customer-defined target level.
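As a rough sketch of what this enables, the boto3 call below defines a Target Tracking policy on a metric math expression (queue backlog per ECS task); the resource names, queue, and cluster/service values are placeholders:

```python
# Sketch: Target Tracking on a metric math expression with Application Auto Scaling.
# Resource names, queue, and cluster/service values are placeholders.
import boto3

client = boto3.client("application-autoscaling")
client.put_scaling_policy(
    PolicyName="backlog-per-task-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 10.0,  # aim for ~10 queued messages per running task
        "CustomizedMetricSpecification": {
            "Metrics": [
                {
                    "Id": "queue_depth",
                    "MetricStat": {
                        "Metric": {
                            "Namespace": "AWS/SQS",
                            "MetricName": "ApproximateNumberOfMessagesVisible",
                            "Dimensions": [{"Name": "QueueName", "Value": "my-queue"}],
                        },
                        "Stat": "Sum",
                    },
                    "ReturnData": False,
                },
                {
                    "Id": "running_tasks",
                    "MetricStat": {
                        "Metric": {
                            "Namespace": "ECS/ContainerInsights",
                            "MetricName": "RunningTaskCount",
                            "Dimensions": [
                                {"Name": "ClusterName", "Value": "my-cluster"},
                                {"Name": "ServiceName", "Value": "my-service"},
                            ],
                        },
                        "Stat": "Average",
                    },
                    "ReturnData": False,
                },
                {
                    "Id": "backlog_per_task",
                    "Expression": "queue_depth / running_tasks",  # the metric math
                    "ReturnData": True,  # scale on this expression's value
                },
            ]
        },
    },
)
```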
Read More for the details.
Starting today, customers can update the Apple macOS operating system from within the guest environment on their Amazon Elastic Compute Cloud (EC2) M1 Mac instances. With this capability, customers can now update their guest environments to a specific or the latest (non-beta) macOS version without having to tear down their existing macOS environments, launch new instances, and reinstall libraries, tooling, and dependencies such as Apple Xcode.
Read More for the details.
Starting today, the general-purpose Amazon Elastic Compute Cloud (Amazon EC2) M6a instances and compute-optimized Amazon EC2 C6a instances are available in South America (Sao Paulo) region. These instances are powered by third-generation AMD EPYC processors with an all-core turbo frequency of up to 3.6 GHz, and they are built on the AWS Nitro System. M6a instances deliver up to 35% better price performance than comparable M5a instances, while C6a instances deliver up to 15% better price performance than comparable C5a instances. Both instances offer 10% lower cost than comparable x86-based EC2 instances.
Read More for the details.
We are excited to announce the launch of the updated Amazon GameLift console experience to help customers more intuitively and efficiently manage and scale their game servers. Amazon GameLift is a fully managed solution that allows you to manage and scale dedicated game servers for session-based multiplayer games. With this release, customers can more easily monitor and manage their game server instances and settings from a single interface.
Read More for the details.
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting today, AWS customers can use the Amazon Kendra Confluence Server Connector to index and search documents from Confluence Server.
Read More for the details.
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting today, AWS customers can use the Amazon Kendra SharePoint OnPrem Connectors (2013, 2016, 2019) and Subscription Edition to index and search documents from SharePoint OnPrem.
Read More for the details.
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting today, AWS customers can use the Amazon Kendra Confluence Cloud Connector to index and search documents from Confluence Cloud.
Read More for the details.
Amazon OpenSearch Service announces security analytics that provides new threat monitoring, detection, and alerting features. These capabilities help you to detect and investigate potential security threats that may disrupt your business operations or pose a threat to sensitive organizational data.
Read More for the details.