Azure – Azure Batch task authentication token will be retired on 30 September 2024
End of life announcement of Azure Batch task authentication token on 30 September 2024
Read More for the details.
You’re receiving this email because you currently use the 2017-03-01-GA API of Azure Container Registry.
Read More for the details.
AWS Systems Manager Fleet Manager enables customers to connect to their SSM managed instances through browser-based RDP from the SSM Fleet Manager console without opening any inbound ports to public or private IPs. In addition to the default resolution of 720p, you can now select resolutions of 600p, 900p, and 1080p for your RDP sessions.
Read More for the details.
Today, AWS announces the availability of AWS Fargate for Amazon ECS Windows containers in the AWS GovCloud (US) Regions. This feature simplifies the adoption of modern container technology for Amazon ECS customers by making it even easier to run their Windows containers on AWS.
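For teams getting started from code, registering a Windows task definition for Fargate might look roughly like the TypeScript sketch below using the AWS SDK for JavaScript v3; the family name, image, and sizing are illustrative placeholders rather than values from the announcement, and us-gov-west-1 is AWS GovCloud (US-West).

```ts
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({ region: "us-gov-west-1" }); // AWS GovCloud (US-West)

await ecs.send(
  new RegisterTaskDefinitionCommand({
    family: "iis-on-fargate", // placeholder family name
    requiresCompatibilities: ["FARGATE"],
    networkMode: "awsvpc",
    cpu: "1024", // Windows tasks on Fargate require at least 1 vCPU
    memory: "2048",
    runtimePlatform: {
      operatingSystemFamily: "WINDOWS_SERVER_2019_CORE",
      cpuArchitecture: "X86_64",
    },
    containerDefinitions: [
      {
        name: "iis",
        image: "mcr.microsoft.com/windows/servercore/iis",
        essential: true,
        portMappings: [{ containerPort: 80, protocol: "tcp" }],
      },
    ],
  })
);
```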
Read More for the details.
Amazon Inspector is now available in four additional regions including Africa (Cape Town), Asia Pacific (Osaka), Asia Pacific (Jakarta), and Europe (Zurich). You can now continuously monitor your Amazon EC2 instances, AWS Lambda functions, and container images residing in Amazon Elastic Container Registry (ECR) for software vulnerabilities and unintended network exposure in these regions.
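Activation in a newly supported region is a single API call. Below is a minimal TypeScript sketch with the AWS SDK for JavaScript v3, using Africa (Cape Town) as an example; the resource types listed are the ones named above.

```ts
import { Inspector2Client, EnableCommand } from "@aws-sdk/client-inspector2";

// Enable Amazon Inspector scanning in one of the new regions (Cape Town shown).
const inspector = new Inspector2Client({ region: "af-south-1" });

await inspector.send(
  new EnableCommand({
    resourceTypes: ["EC2", "ECR", "LAMBDA"], // instances, container images, functions
  })
);
```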
Read More for the details.
Today, we are announcing the availability of Route 53 Resolver Query Logging in the Israel (Tel Aviv) Region. Route 53 Resolver Query Logging enables you to log DNS queries that originate in your Amazon Virtual Private Clouds (Amazon VPCs). With query logging enabled, you can see which domain names have been queried, the AWS resources from which the queries originated – including source IP and instance ID – and the responses that were received.
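As a sketch of what enabling this looks like programmatically, the TypeScript snippet below uses the AWS SDK for JavaScript v3 to create a query logging configuration and associate it with a VPC; the log group ARN and VPC ID are placeholders, and il-central-1 is the Israel (Tel Aviv) Region code.

```ts
import {
  Route53ResolverClient,
  CreateResolverQueryLogConfigCommand,
  AssociateResolverQueryLogConfigCommand,
} from "@aws-sdk/client-route53resolver";

const resolver = new Route53ResolverClient({ region: "il-central-1" });

// Create a logging config that writes query logs to a CloudWatch log group.
const { ResolverQueryLogConfig } = await resolver.send(
  new CreateResolverQueryLogConfigCommand({
    Name: "vpc-dns-query-logs", // placeholder name
    DestinationArn:
      "arn:aws:logs:il-central-1:123456789012:log-group:vpc-dns-queries", // placeholder
    CreatorRequestId: `qlog-${Date.now()}`, // idempotency token
  })
);

// Associate the config with the VPC whose DNS queries should be logged.
await resolver.send(
  new AssociateResolverQueryLogConfigCommand({
    ResolverQueryLogConfigId: ResolverQueryLogConfig?.Id,
    ResourceId: "vpc-0123456789abcdef0", // placeholder VPC ID
  })
);
```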
Read More for the details.
Today, AWS IoT Core for LoRaWAN announces the general availability of public network support for LoRaWAN-based Internet of Things (IoT) devices. With this update, you can now connect your LoRaWAN devices to the cloud over publicly available LoRaWAN networks operated by Everynet, a LoRaWAN network operator, without deploying and operating a private LoRaWAN network. The public LoRaWAN network is provided as a service, and with this public network support, customers can choose Everynet’s network from within the AWS console.
Read More for the details.
Enterprises are embarking on cloud transformations for scalable and secure infrastructure, better resource availability, greater return on investment, and improved user experiences. Based on their requirements, they embrace different cloud adoption strategies, such as multicloud or hybrid cloud environments. Enterprises looking to adopt SAP on the cloud are exploring opportunities that will enable a seamless transformation, but the lack of automation, continuous integration and continuous deployment (CI/CD) pipelines, and DevOps practices are among the challenges they face in this transformation journey.
In uncertain and changing market conditions, enterprises running on-premises SAP environments face a number of challenges, including scalability issues that lead to increased CapEx, an inability to meet demands for innovation, ever-changing infrastructure needs and cost overruns, time-consuming and error-prone manual tasks, and poor security.
Tata Consultancy Services (TCS) has developed SAP Cloudify, an automation framework that addresses these on-premises challenges and accelerates migration to the cloud. This framework complements Google Cloud’s deep investments in flexible and customizable SAP deployment, automation, and operational tools that includes Workload Manager for SAP Solutions. TCS SAP Cloudify is based on the principles of:
- Automation
- Optimization
- Reduce and reuse
The key features of this solution are:
- Secure infrastructure built with supervised automation
- A single integrated pipeline to provision VMs, cloud resources, and the SAP application
- Auto-generated passwords for key roles with secure storage
- Customizable scripts to adapt to multiple scenarios
- Automation-led migration on Google Cloud from assessment to execution
- Tools, best practices, and automation scripts for efficient implementation, migration, and operation of SAP applications
TCS SAP Cloudify provides automation-led migration on Google Cloud from assessment to execution, working in collaboration with existing Google Cloud migration services. The solution comprises tools, best practices, and automation scripts that enable enterprises to efficiently implement, migrate, and operate SAP applications with:
- Improved productivity
- Improved system quality
- Enhanced agility and security
- Reduced TCO
- Downtime minimization
- Increased process automation
- Real-time business insights
- Reduced time-to-market
Let’s take a closer look at TCS SAP Cloudify.
The cloud journey starts with the discovery phase: TCS analyzes the customer landscape with predefined documents, which help in understanding the landscape, its challenges and pitfalls, deciding on move groups, and appropriately sizing the SAP system. This phase also helps identify network bandwidth requirements and other cloud resource requirements.
After the assessment phase, where SAP capacity planning and move groups are decided, we move on to the migration phase, which starts with provisioning VMs and installing the OS and SAP applications. TCS SAP Cloudify includes a set of reusable automation assets on Google Cloud to tailor SAP applications for greenfield and brownfield SAP migrations based on customer requirements.
TCS SAP Cloudify helps with assessment, planning, and automation-led migration execution using pre-built CI/CD pipelines powered by Terraform and Ansible scripts for migrating SAP applications to Google Cloud.
The figure below illustrates the three automation scenarios:
Installations/Implementations: In greenfield implementations, automation scripts help stand up SAP applications faster based on sizing and environment requirements (Dev, QA, Prod, and DR). The SAP Cloudify framework complements existing Google Cloud automation scripts, targeting customization for complex distributed SAP installations or a larger landscape with multi-regional requirements.
TCS SAP Cloudify’s automation scripts provide a rich variety of options for multiple SAP applications. The scripts provide the flexibility for easy customization and can accommodate multiple scenarios like standalone, distributed, custom SAP applications, database installations, etc. Inputs to these automation scripts are provided using a .csv file and deployed using the Google Cloud CLI.
The scenarios available for greenfield implementations on Google Cloud include:
- VM Build
- HANA DB Install
- SAP S/4HANA
- SAP ECC on HANA
- SAP BW/4HANA
- Post processing
- SAP BW on HANA
- Solution Manager
- SAP BODS, BOBJ
- Web Dispatcher
- HANA Addons – FL/PAL
These automation scripts are robust, dynamic, and secure, to help meet customer requirements. Moreover, post-installation activities like SPAU and SPDD are also automated, including automated bulk transport requests.
Migration: In different migration scenarios, enterprises choose the relevant migration option based on their needs, criticality and downtime tolerance. The TCS SAP Cloudify framework includes choosing the right SAP certified VMs, adding required disk space and other network components, and installing SAP applications including a secondary system for HANA replication. This helps reduce the overall time to build and migrate SAP applications, while minimizing downtime. The scenarios that are available for migration with TCS SAP Cloudify are high availability (HA) and failover setup, export and import, backup and restore, retrofit transport movement, and system copy or system refresh of SAP systems.
TCS SAP Cloudify aims to streamline the migration steps — export/import, backup/restore and HANA system replication — by automating the required steps. Working with Google Cloud, TCS SAP Cloudify helps achieve error-free restores with reduced complexity that are 30% faster than regular manual restores.
Application Managed Services (AMS) support: TCS SAP Cloudify helps organizations efficiently manage their SAP applications on Google Cloud. SAP Cloudify offers automation scripts for the most common operational activities such as stop/start of SAP applications, kernel upgrade, HANA HSR, client copy and backup/restore.
TCS’ solution improves resource efficiency and reduces issues associated with activity downtime, allowing enterprises’ resources to focus on other critical activities in application support.
For over two decades, TCS’ partnership with SAP has helped enterprises in their business growth and digital transformation. TCS’ SAP-led business transformation experience is steeped in deep industry and technology expertise and trusted contextual knowledge, enabling enterprises to rebuild the digital core, reinvent business processes, and tackle business challenges. Our best-of-breed offerings, advisory, and delivery expertise enable technology-driven business transformation and return on investment for enterprises. Moreover, TCS’ Google Business Unit provides end-to-end services on Google Cloud for cloud migration, application and data modernization, managed services, artificial intelligence (AI), SAP on the cloud, digital transformation, and industry-specific business innovation.
Get ready to implement TCS SAP Cloudify on Google Cloud for a smooth migration of your SAP systems, irrespective of their size, the complexities of your existing journey, or your budget challenges. To learn more, please reach out to GBU.marketing@tcs.com or talk to a Google Cloud sales specialist.
Read More for the details.
Over the past few weeks, we had the unique opportunity to connect with many of you at VMware Explore in Las Vegas, and Google Cloud Next in San Francisco. One common theme that emerged from these conversations is the continued need to have a cost-effective, secure, and non-disruptive path to the cloud, especially for VMware-based workloads which are often at the core of your IT footprint.
Google Cloud VMware Engine provides one of the fastest ways to lift and transform your existing VMware estate into Google Cloud. VMware Engine is an enterprise-grade platform with unique capabilities like a four-nines uptime SLA in a single zone, 100 Gbps of dedicated east-west networking, native VPC integration, and more. Today’s post provides a summary of new and recent capabilities that will enable you to migrate and run your VMware workloads on a cloud-first VMware platform.
VMware Engine’s ve2 node platform will be offered in many flexible combinations of CPU and storage, with high memory, enabling customers to optimize their TCO with the right configuration for their business. Based on next-generation CPUs (3rd Generation Intel® Xeon® Scalable processors, formerly code-named Ice Lake) with DDR4 RAM and all-NVMe drives, ve2 nodes will also support large cluster sizes of 32 nodes and 100+ node private clouds, and will continue to provide a four-nines uptime SLA in a single zone. For higher availability, stretched private clouds will also be supported (Preview in us-east4 in early Q4’2023). Our first node type within this new family is ve2-standard-128, which offers approximately 2.7x the RAM (2,048 GB), 1.8x the CPU (64 cores, 128 hyperthreaded cores), and 1.3x the storage (25.6 TB of raw NVMe data storage) at a compelling price point.
Over the past year, we have increased our global presence to 19 regions, with the most recent ones being Tel Aviv, Turin, Santiago and Delhi.
(Preview Q3’2023 in select regions) Storage Only Nodes overcome a limitation of HCI architectures by letting you add storage capacity without having to pay for compute. This lowers TCO and better matches the infrastructure to workload needs: in storage-capacity-constrained clusters, Storage Only Nodes deliver a lower-cost option to expand a cluster’s storage without adding cores or memory, while keeping the same four-nines uptime SLA for the cluster. Recent developments also include support for Google Cloud Filestore as datastores and NetApp Cloud Volumes for in-guest storage, to cater to storage-intensive environments. Filestore High Scale and Filestore Enterprise are VMware-certified as NFS datastores with Google Cloud VMware Engine. Similarly, the marketplace service NetApp Cloud Volumes can be leveraged as an NFS datastore for capacity-hungry VMs.
- Newly introduced Terraform support for private cloud CRUD operations enables Infrastructure as Code automation for private cloud provisioning activities.
- (Preview Q3’2023) Advancements in networking further simplify the VMware networking architecture and experience in VMware Engine. Zero-config VPC peering during private cloud creation, together with higher limits on the number of peerings allowed, radically simplifies the task of building a connected VMware private cloud while enabling a variety of networking topologies. The addition of native Cloud DNS support for bi-directional DNS resolution, covering both management and workload resolution, and support for more than one consumer DNS binding also meet enterprise needs in a simple and elegant fashion.
- (Preview Q3’2023) More functionality delivered via the Google Cloud API and CLI enables users to programmatically manage their Google Cloud VMware Engine environments, including API/CLI functions for managing the new networking model, network peering, external access rules and the external IP service, consumer DNS, and more.
- (Preview Q3’2023) A full Google Cloud console experience for GCVE enables customers to manage their VMware Engine environments directly in the console without opening another tab. In addition, you can view logs in Logs Explorer.
Over the past few months, new security capabilities have been added to VMware Engine.
- Fine-grained (per-action) access control for actions performed via the API/CLI. You can select from predefined and custom roles in addition to basic roles; these predefined or custom roles carry fine-grained permissions for specific actions that apply only to VMware Engine, giving you more control and flexibility over access control. The same will apply to actions performed via the console once it becomes available.
- VPC Service Controls let you define a security perimeter for your VMware Engine resources to reduce data exfiltration risks. The service perimeter limits exporting and importing of resources and their associated data to within the defined perimeter. VMware Engine now supports a VPC Service Controls guided opt-in and policy export that enables you to attach VMware Engine services to a new or existing VPC Service Controls perimeter.
- More system transparency, with support for ESXi log forwarding, and auditable procedures enabled by customer-controlled access elevation on customer workloads.
- (Preview) More options for key management for vSAN encryption in GCVE, with customer-managed keys in Cloud KMS. This builds on the already available capabilities of external third-party KMS with customer-managed keys and Google Cloud KMS with Google-managed keys.
We recently announced GCVE Protected, a new Google Cloud offering with bundled pricing for both Google Cloud VMware Engine and Google Cloud’s Backup & DR Service. With GCVE Protected, you can protect all your virtual machines on a VMware Engine node with our first-party backup and DR software for only an incremental add-on cost per VMware Engine node, giving you centralized, fast, and cost-efficient backup and recovery capabilities for your VMware Engine VMs.
This wraps up the updates for this time around. Please stay tuned for more and be sure to bookmark the GCVE release notes for updates. You can learn more about these recent updates by viewing our on-demand sessions from VMware Explore US as well as our session on GCVE at Google Cloud Next’23. Additionally, if you are looking to get started but need some guidance, be sure to check out our Rapid Migration Program (RaMP), or if you’re ready to rock n’ roll, click here to get started with a free discovery and assessment of your current IT landscape so we can help craft the right migration plan for your business.
Read More for the details.
The Cloud SQL Node.js connector is the easiest way to securely connect your Node.js application to your Cloud SQL database. With today’s General Availability announcement, we now recommend using the Cloud SQL Node.js connector.
Cloud SQL has two types of connectors:
- Language-specific connectors – available for Java, Python, Go, and now Node.js
- Cloud SQL Auth proxy – a good choice when a language-specific connector is not available
These connectors come with significant advantages over a traditional database connection. We laid out a few of the benefits of using a connector in our previous blog post, Understanding Cloud SQL connectors. With code-level control, the Cloud SQL Node.js connector is the most convenient way to securely connect your application to your database.
The Cloud SQL Node.js Connector README file includes usage documentation for PostgreSQL, MySQL, and SQL Server showing how to integrate the connector into your application. To show just how easy it can be, we’ve released a Codelab for How to connect a Node.js application on Cloud Run to a Cloud SQL for PostgreSQL database.
In a single file, this application:
- accepts HTTP requests
- securely connects to the Cloud SQL for PostgreSQL database using a service account with automatic IAM database authentication
- stores the time of the HTTP request in the database
- returns the timestamps of the last five visits
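A condensed sketch of that pattern appears below, assuming a hypothetical instance connection name and IAM database user; it follows the connector’s documented getOptions API for PostgreSQL with the pg driver.

```ts
import http from "node:http";
import pg from "pg";
import {
  Connector,
  AuthTypes,
  IpAddressTypes,
} from "@google-cloud/cloud-sql-connector";

// Placeholder instance and IAM user; substitute your own values.
const connector = new Connector();
const clientOpts = await connector.getOptions({
  instanceConnectionName: "my-project:us-central1:my-instance",
  ipType: IpAddressTypes.PUBLIC,
  authType: AuthTypes.IAM, // automatic IAM database authentication, no password
});

const pool = new pg.Pool({
  ...clientOpts,
  user: "my-service-account@my-project.iam", // IAM principal, truncated form
  database: "postgres",
  max: 5,
});

await pool.query(
  "CREATE TABLE IF NOT EXISTS visits (visit_time timestamptz NOT NULL)"
);

// Record each request's timestamp and return the last five visits.
http
  .createServer(async (_req, res) => {
    await pool.query("INSERT INTO visits (visit_time) VALUES (NOW())");
    const { rows } = await pool.query(
      "SELECT visit_time FROM visits ORDER BY visit_time DESC LIMIT 5"
    );
    res.setHeader("content-type", "application/json");
    res.end(JSON.stringify(rows));
  })
  .listen(8080);
```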
The Node.js connector simplifies the experience of connecting your application to your database so that developers can focus on building application features. Practices like automatic IAM authentication — which allows you to avoid using and rotating passwords — improve your application security, save you time today, and save you time in the future. To get started with the connector and see an example of those recommended practices in action, follow the link to the Codelab.
Read More for the details.
Editor’s Note: FLUIDEFI® is a SaaS platform for institutional investors in the decentralized finance space — a complex industry to navigate because of its volatility, fragmented nature, and lack of clear governance. By providing risk qualification, portfolio modeling, and other services, FLUIDEFI fulfills an important market need. Migrating to AlloyDB has netted FLUIDEFI 3X gains in processing speed and 60% lower cost for a similar instance size.
In the complex world of decentralized finance (DeFi), institutional investors and audit firms face a lack of reliable and standardized real-time data from a recognized authority. Complicating this further is the lack of available risk ratings and audit trails.
Plugging the gap in trusted sources for accurate data, actionable insights, risk rating, and audit trails requires serious number crunching. It takes a massive amount of data collection, storage, and calculations. This is where we step in.
Our platform offers a comprehensive solution that simplifies qualifying risks, building portfolio models, testing investment strategies, tracking investments, and creating auditable financial reports. We do this by using the latest algorithms and risk models to provide real-time data and insights to crypto investors and traders so they can make informed decisions.
We also offer risk rating and audit-trail features that help investors and traders track their investments and monitor their performance over time. These features help them evaluate the effectiveness of their investment strategies and adjust them to optimize their returns.
Our feature-rich solution enables several critical use cases for our customers: decision-making for investing in alternative assets; tracking the performance of these assets through our portfolio manager, data visualizations, and tracking mechanisms; tax reporting; and financial auditing through two independent sources for all verified blockchain (on-chain) transactions.
Our solution uses both a graphical user interface (GUI) and a RESTful API. We process a massive amount of data: our databases hold over 2 billion records and use more than 40 real-time ingestion/computation services that demand fast response times. To meet the heavy workload demand and reduce our database infrastructure management overhead, we first started with Amazon Aurora. Despite tuning and scaling the instances, we were not able to achieve the throughput we had hoped for from the service. It was also unable to keep up with the addition of new protocols and chains, and we realized that we would take a hit on both cost and efficiency as we scaled. We considered our options based on the most critical factor for us, write performance, given the amount of data we ingest from blockchains every hour, and turned to AlloyDB for PostgreSQL.
An early proof of concept (POC) gave us complete confidence in AlloyDB as our database of choice. In fact, the POC results were so robust that we followed up with a full migration within weeks of AlloyDB becoming generally available.
Coming from another cloud provider, it was easy and intuitive for our team members to familiarize themselves with Google Cloud services like container management, virtual machines, and AlloyDB. Thanks to Database Migration Service, we managed to migrate all our data with zero downtime for our customers. Additionally, the full compatibility that AlloyDB offers for PostgreSQL also meant that we migrated our app as-is without any code changes, which saved us a lot of time and effort.
We’ve seen a dramatic improvement in terms of cost for the same number of virtual CPUs and memory because we don’t incur any I/O charges with AlloyDB. AlloyDB’s tiered storage and scale-out architecture improved operational performance by meeting or exceeding our throughput expectations. Specifically, our previous cloud database took 23 minutes to recompute a complex materialized view. With AlloyDB, the same materialized view now refreshes in just four minutes while using the same memory and number of CPUs but with an even larger dataset. We also increased redundancy and reliability thanks to high availability instances. Unlike our previous cloud provider, AlloyDB’s pricing was transparent and predictable, and we were never surprised with extra charges we didn’t expect. By creating a database cluster for each blockchain we support, we can isolate blockchains from each other, so our reliability has never been better. Most importantly, the faster write operations and lowered lag time between the writer and reader instances enabled queries much closer to real-time.
Now we can scale to more blockchains and more decentralized exchanges without worrying about slowing data delivery to our customers. In fact, since migrating to Google Cloud, we have boosted our platform’s response speed by 3x, reduced our costs by 60 percent, and we are confident we can support more transactions per second. We are now integrating four new decentralized exchanges and a blockchain. This allows institutional investors using FLUIDEFI® to diversify safely into decentralized finance.
Our service demands a high-performance database that can handle large volumes of data in real time. AlloyDB is critical to our success because it can store and process vast amounts of data, perform complex queries, and generate real-time insights that help investors and traders make informed decisions. Its advanced features, like high availability, scalability, and security, make it ideal to support our data processing and analytics needs. We see AlloyDB as an essential tool to achieve our vision — to capture a significant share of the rapidly growing DeFi market.
Read More for the details.
Welcome to the second Cloud CISO Perspectives for September 2023. This month, I’m turning the mic over to my colleague Eric Brewer, Google Cloud’s vice president of infrastructure and Google Fellow, to explain the importance of this year’s Securing Open Source Software Summit and why securing open source code is one of the most crucial tasks we face.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
By Eric Brewer, Google Fellow, vice president, infrastructure, Google Cloud
Earlier this month Phil and I had the great pleasure of participating in the 2023 Securing Open Source Software Summit, hosted by the Open Source Security Foundation (OpenSSF) in Washington, D.C. We joined dozens of leading experts and practitioners representing the tech industry, leading open source foundations, and several U.S. government agencies to recap the progress made in addressing open source risks over the previous 18 months, and to identify priorities for future collaboration.
Eric Brewer, Google Fellow, VP, infrastructure, Google Cloud
The first Open Source Software Security Summit was convened by the White House in the immediate aftermath of the Log4Shell vulnerability. Log4Shell was a vulnerability of epic proportions because it was easy to exploit — a perfect 10.0 on the severity scale. It sent tens of thousands of security practitioners scrambling to find and patch vulnerable systems in what was arguably one of the largest collective incident response efforts in Internet history.
While the mood at this year’s summit was somewhat lighter than at the prior gathering, it was nonetheless an opportunity for reflection. The key lesson to come from Log4Shell is that open source security is a lot like environmental conservation. That’s because open source software is a “public good,” an economic term for a resource that everyone can benefit from and that we don’t need to compete over. By enabling software developers to freely reuse and build on each others’ contributions, open source ecosystems have proven a key driver of digital innovation in tech, in public services, and in sectors like financial services and healthcare. Similarly, one developer’s use of open source code doesn’t meaningfully reduce its availability to others.
But just as human activity can strain the environment and natural ecosystems, consumption of open source projects without corresponding investments in maintenance can prove unsustainable — and leave key projects more vulnerable. We’re all stewards of healthy open source ecosystems through the time we spend reviewing others’ code, and when necessary, cleaning it up. Large organizations, such as government agencies and tech companies like Google have an outsized — though by no means exclusive — role to play in that effort.
I have been making the “public good” argument for a few years now, but the big change is that the U.S. government is now using this framing too, as you can read in the new U.S. open-source security roadmap covered below. It’s great to see that the U.S. has an open-source security strategy at all, and better still, one that lines up well with the “public good” view.
How Google is championing open source security
Even before Log4Shell, Google was making significant investments to help secure key open source ecosystems in response to a troubling rise in software supply chain attacks. Although Google had done great work on the basics (such as support for Linux and OpenSSL) for more than a decade, I started worrying about supply chain attacks in 2018. In my role as tech leader for Kubernetes in particular, I realized how many risky dependencies we were collectively using.
This led me to create the OpenSSF in 2019, working with Microsoft and others to get an industry-wide approach to these very difficult problems. Unfortunately, we got off to a rocky start due to the arrival of a very distracting pandemic. The SolarWinds attack in early 2020 was exactly the kind of attack I was worried about, and its arrival gave the OpenSSF an obvious burst of energy and interest (and funding!).
In August 2020, Google helped to relaunch OpenSSF and committed $100 million in funding to help open source maintainers address vulnerabilities. In its first year, we partnered with OpenSSF to launch tools including Scorecard, which helps developers identify trustworthy libraries, and SLSA, a framework for hardening build and release processes.
Following the first summit in 2022, Google announced the creation of an “Open Source Maintenance Crew,” a team of dedicated engineers who work closely with the maintainers of vital open source projects. As of May 2023, members of that team had contributed security improvements to more than 180 widely-used projects. We also partnered with OpenSSF and other tech companies to launch Alpha-Omega, a program aimed at speeding resources and expertise to several high-impact projects and automated tooling for thousands more.
At Google Cloud we’re also taking steps to help our customers use open source securely through our Assured OSS solution. With Assured OSS, Google Cloud is curating more than 1,000 of the most popular Java and Python packages, offering organizations of all sizes access to the same trusted libraries Google’s own engineers rely on. Each library is subject to ongoing fuzz testing and scanning, is signed by a unique Google public-private key, and includes available software bills of materials (SBOMs).
Public-private partnerships to secure open source ecosystems
One of the biggest takeaways from this year’s summit was the rapid evolution of U.S. government policy around open source security in terms of strategy, resources, and the expertise to carry out that policy. The summit drew participation from Deputy National Security Advisor Anne Neuberger (who convened the first White House summit and has been an avid proponent of all this good work), Acting National Cyber Director Kemba Walden, and CISA Executive Assistant Director Eric Goldstein, as well as many others from across government.
In the 18 months since Log4Shell, government agencies have taken a number of steps to be more engaged in open source security:
- In March, the U.S. National Cybersecurity Strategy committed U.S. federal agencies to invest in developing secure open source tools and frameworks, and hinted at liability protections for small, independent open source developers;
- Earlier this year, the Office of the National Cyber Director (ONCD) launched the Open Source Software Security Initiative (OS3I) in partnership with the White House, the U.S. Cybersecurity and Infrastructure Security Agency (CISA), and several other agencies, to focus on memory safety issues; and
- On the first day of the summit itself, Goldstein announced the launch of CISA’s Open Source Security Roadmap, outlining the agency’s plans to launch a federal open source program office (OSPO), continue mapping open source risks, drive adoption of SBOMs, and expand access to security education and best practices.
This is by no means an exhaustive list. I’m personally pleased to see that the federal government is signaling its commitment to making meaningful contributions to open source security, and to partnering with foundations like OpenSSF and companies like Google to expand the conversation to other sectors and organizations that might otherwise lack the resources to manage open source effectively.
Some of the most promising areas for public-private collaboration include:
- Championing adoption of memory-safe languages: OS3I has already signaled that its top priority will be to develop strategies for transitioning away from memory-unsafe programming languages, such as C and C++. Large technology companies around the world, including Google, are similarly in the midst of a multi-year journey toward adoption of languages like Rust. OS3I could prove an essential forum for exchanging information and lessons learned.
- Mapping sector-specific open source dependencies: Just as projects like OpenSSF’s Alpha-Omega are geared toward speeding resources to remediate vulnerabilities in a small number of high-risk dependencies, a next step could be to apply this strategy on a sector-by-sector basis. There are significant opportunities for companies like Google to collaborate with sector risk management agencies (SRMAs) and their corresponding ISACs to help critical infrastructure operators “shift left” by prioritizing supply chain security in addition to threat sharing and incident response planning.
- Security education and training: Lack of access to high-quality, low-cost security education for software engineers remains a key barrier to building safer open source ecosystems. Although organizations like ISC2 have succeeded in training tens of thousands of professionals in recent years, there’s much more to be done. In the last year, Google launched a number of new resources to help address that gap, including the new Google Cybersecurity Certificate program and hands-on training provided through Google-sponsored Cyber Clinics.
We look forward to sharing our views on ways to use public-private partnerships to strengthen open source ecosystems in our response to ONCD’s Open Source Software Security RFI.
Here are the latest updates, products, services, and resources from our security teams so far this month:
- How leaders can reduce risk by shutting down security theater: Security theater in the cloud is a problem, but not an insurmountable one. To reduce the risk your organization faces, embrace more practical security — and leave the theatrics behind. Read more.
- Introducing the unified Chronicle Security Operations platform: Chronicle’s latest update unifies our SOAR and SIEM solutions, integrates Mandiant’s attack surface management technology, and offers more robust application of threat intelligence. Read more.
- Confidential VMs on Intel CPUs: Your new intelligent defense: Through our partnership with Intel, Google Cloud is extending Confidential VMs on new C3 machines to use 4th Gen Intel Xeon Scalable CPUs and Intel TDX technology. Read more.
- New custom security posture controls and threat detections in Security Command Center: Security Command Center now allows organizations to design their own customized security controls and threat detectors for their Google Cloud environment. Read more.
- Policy Controller violations now in Security Command Center: Policy Controller enforces programmable policies for Google Kubernetes Engine to help customers with security, governance, and compliance guardrails for their workloads. Read more.
- How to scale reducing your attack surface: We’re unveiling new capabilities to Mandiant Attack Surface Management (ASM) that enable an outcome-focused and risk-based approach to security. Read more.
- Backchannel diplomacy: APT29’s rapidly-evolving diplomatic phishing operations: In the first half of 2023, Mandiant and Google’s Threat Analysis Group (TAG) have tracked an increase in the frequency and scope of APT29 phishing operations, often centered on foreign embassies in Ukraine. Read more.
- Bolster government infrastructure with state and local cybersecurity grants: A new round of U.S. government funding is available to help protect information systems owned or operated by, or on behalf of, State, Local, Tribal, and Territorial (SLTT) governments. Read more.
- System hardening at Google scale: New challenges, new solutions: Hardening systems has changed a lot over the past 20 years. From operationalizing hardening processes to responding to new regulations, hosts Anton Chuvakin and Tim Peacock get into the details with Andrew Hoying, senior security engineering manager at Google. Listen here.
- What is Chronicle? Beyond XDR and into the next generation of SecOps: Chronicle’s got a good story to tell, and Chris Corde, Google Cloud’s senior director of product management for security operations, discusses “why Chronicle” with Anton and Tim. Listen here.
- Threat Trends: Unraveling WyrmSpy and DragonEgg with Lookout: Host Luke McNamara is joined by Kristina Balaam, staff threat researcher at Lookout, to discuss her work attributing two new mobile malware families to APT41. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in two weeks with more security-related updates from Google Cloud.
Read More for the details.
We are excited to announce that Unreal Engine (UE) game developers can now more quickly access and integrate with Amazon GameLift using a new standalone plugin for UE. Amazon GameLift is a fully managed service that allows developers to quickly manage and scale dedicated game servers for multiplayer games. With this release, the Amazon GameLift Plugin for Unreal Engine supports UE5, UE5.1, and UE5.2 on Windows and macOS.
Read More for the details.
Amazon Simple Notification Service (Amazon SNS) now supports AWS CloudTrail logging for the Publish and PublishBatch API actions. By logging these data events, you can see when API calls were made to Amazon SNS and who made them, enhancing data visibility for security and operations teams and enabling governance, compliance, and operational auditing.
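Data events are not logged by default; you opt in on a trail. A minimal TypeScript sketch using the AWS SDK for JavaScript v3 and CloudTrail’s advanced event selectors follows; the trail name is a placeholder for an existing trail in your account.

```ts
import {
  CloudTrailClient,
  PutEventSelectorsCommand,
} from "@aws-sdk/client-cloudtrail";

const cloudtrail = new CloudTrailClient({ region: "us-east-1" });

// Opt an existing trail into logging SNS data events (Publish / PublishBatch).
await cloudtrail.send(
  new PutEventSelectorsCommand({
    TrailName: "my-trail", // placeholder trail name
    AdvancedEventSelectors: [
      {
        Name: "Log SNS topic data events",
        FieldSelectors: [
          { Field: "eventCategory", Equals: ["Data"] },
          { Field: "resources.type", Equals: ["AWS::SNS::Topic"] },
        ],
      },
    ],
  })
);
```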
Read More for the details.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7g, M7g, and R7g instances are available in the AWS Europe (Spain) Region. These instances are powered by AWS Graviton3 processors and built on the AWS Nitro System. AWS Graviton3 processors provide up to 25% better compute performance compared to AWS Graviton2 processors. The AWS Nitro System is a collection of AWS-designed hardware and software innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.
Read More for the details.
AWS Compute Optimizer now supports 153 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types. The newly supported instance types include the latest generation general-purpose instances (M7g, M7i, M7i-flex, M7a, M6a), compute optimized instances (C7gn, C7g), memory optimized instances (R7g, R7iz, R6id, R6a, X2iezn), storage optimized instances (I4g, I4i), and high-performance-computing (HPC) optimized instances (Hpc7g, Hpc6id). This expands the total number of EC2 instance types supported by Compute Optimizer to 636. Additionally, Compute Optimizer now delivers Elastic Block Store (EBS) volume recommendations for EBS volumes that are attached to multiple EC2 instances simultaneously.
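To see what this looks like from code, the sketch below lists EC2 rightsizing recommendations with the AWS SDK for JavaScript v3 and prints the top-ranked option for each instance; fields beyond those shown are omitted for brevity.

```ts
import {
  ComputeOptimizerClient,
  GetEC2InstanceRecommendationsCommand,
} from "@aws-sdk/client-compute-optimizer";

const optimizer = new ComputeOptimizerClient({ region: "us-east-1" });

// Fetch recommendations for all analyzed instances in the account.
const { instanceRecommendations } = await optimizer.send(
  new GetEC2InstanceRecommendationsCommand({})
);

for (const rec of instanceRecommendations ?? []) {
  const top = rec.recommendationOptions?.[0]; // options are ranked, best first
  console.log(
    `${rec.instanceArn}: ${rec.currentInstanceType} -> ${top?.instanceType}`
  );
}
```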
Read More for the details.
Today, we are announcing the availability of AWS Backup support for Amazon FSx for NetApp ONTAP, Windows File Server, and Lustre in the Asia Pacific (Hyderabad, Jakarta, Melbourne), Europe (Spain, Zurich), Israel (Tel Aviv) and Middle East (UAE) Regions. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon FSx along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.
Read More for the details.
Today, we are announcing the developer preview of the AWS Amplify JavaScript Library v6, which includes reduced bundle sizes, richer TypeScript support, and integrations with Next.js server-side features. The AWS Amplify JavaScript Library enables frontend developers to connect their web and React Native apps to AWS cloud backends. In this developer preview, Amplify JavaScript offers richer TypeScript support for the Auth, Analytics, and Storage categories, and apps using it are served smaller bundle sizes. Amplify JavaScript v6 also introduces an integration with Next.js server-side features such as the App Router, Middleware, API routes, and server functions.
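Since this is a developer preview, the exact API surface may still change; the sketch below illustrates the intended v6 style, where category APIs move to tree-shakeable subpath imports (which is what enables the smaller bundles). The pool IDs, username, and password are placeholders.

```ts
import { Amplify } from "aws-amplify";
// v6 moves category APIs to subpath imports so bundlers can tree-shake
// everything an app doesn't use.
import { signIn } from "aws-amplify/auth";

// Backend configuration is typically generated by the Amplify CLI.
Amplify.configure({
  Auth: {
    Cognito: {
      userPoolId: "us-east-1_example", // placeholder
      userPoolClientId: "exampleclientid", // placeholder
    },
  },
});

// Fully typed Auth call: parameters and results carry TypeScript types.
const { isSignedIn, nextStep } = await signIn({
  username: "jane@example.com",
  password: "placeholder-password",
});
console.log({ isSignedIn, nextStep });
```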
Read More for the details.
Starting today, you can use AWS Application Migration Service (AWS MGN) to prepare your environment for the migration process using the MGN connector directly from the AWS Application Migration Service console.
Read More for the details.
AWS App Runner now supports deploying services from source code repositories that follow a monorepo structure. App Runner makes it easier for developers to quickly deploy containerized web applications and APIs to the cloud, at scale, and without managing infrastructure. With App Runner’s build-from-source capability, you can offload build and deployment workflow management to App Runner and deploy services directly from source code. App Runner provides convenient platform-specific managed runtimes; each of these runtimes builds a container image from your source code and adds language runtime dependencies into your application container image.
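A hedged sketch of creating a service from a subdirectory of a monorepo with the AWS SDK for JavaScript v3 follows; the repository URL, source directory, and connection ARN are placeholders, and the source-directory setting is the piece this launch adds.

```ts
import { AppRunnerClient, CreateServiceCommand } from "@aws-sdk/client-apprunner";

const apprunner = new AppRunnerClient({ region: "us-east-1" });

await apprunner.send(
  new CreateServiceCommand({
    ServiceName: "api-service", // placeholder
    SourceConfiguration: {
      AuthenticationConfiguration: {
        // Placeholder ARN of an existing GitHub connection.
        ConnectionArn:
          "arn:aws:apprunner:us-east-1:123456789012:connection/my-github/abc123",
      },
      CodeRepository: {
        RepositoryUrl: "https://github.com/example/monorepo", // placeholder
        SourceCodeVersion: { Type: "BRANCH", Value: "main" },
        SourceDirectory: "services/api", // build only this part of the monorepo
        CodeConfiguration: {
          ConfigurationSource: "REPOSITORY", // use apprunner.yaml in that directory
        },
      },
    },
  })
);
```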
Read More for the details.