Azure – Generally available: Hierarchical forecasting for Azure Machine Learning
Hierarchical forecasting, now generally available, offers you the capability to produce consistent forecasts for all levels of your data.
Read More for the details.
General availability enhancements and updates released for Azure SQL.
Read More for the details.
With Azure Load Testing, you can now run load tests against private endpoints – endpoints deployed in a virtual network, or public endpoints with access restrictions (for example, restricted client IP addresses).
Read More for the details.
User-defined routes (UDRs) support for private endpoints is now generally available.
Read More for the details.
Network security groups (NSGs) support for private endpoints is now generally available.
Read More for the details.
AWS Trusted Advisor Priority is now generally available for AWS Enterprise Support customers, helping IT leaders focus on key cloud optimization opportunities through curated recommendations prioritized by their AWS account teams. AWS Trusted Advisor provides recommendations that help you follow AWS best practices across cost optimization, performance, security, reliability, and service quotas.
Read More for the details.
In this article, we’ll explore the networking components of Google Kubernetes Engine (GKE) and the options available for each. Kubernetes is an open source platform for managing containerized workloads and services, and GKE is a fully managed environment for running Kubernetes on Google Cloud infrastructure.
The various network components in Kubernetes communicate using IP addresses and ports; an IP address uniquely identifies each component on the network.
Components
Containers – These are the smallest components for executing application processes. One or more containers run in a pod.
Pods – A group of one or more containers that are deployed and managed together. Pods are scheduled onto nodes.
Nodes – Nodes are worker machines in a cluster (a collection of nodes). A node runs zero or more pods.
Services
ClusterIP – The IP address assigned to a service, reachable from within the cluster.
Load balancer – Distributes internal or external traffic to nodes in the cluster.
Ingress – A special type of load balancer that handles HTTP(S) traffic.
IP addresses are assigned to the components and services from various subnets. Variable-length subnet masks (VLSM) are used to create CIDR blocks, and the number of available hosts on a subnet depends on the subnet mask used.
In Google Cloud, the formula for calculating available hosts is 2^n - 4 (where n is the number of host bits), rather than the 2^n - 2 normally used in on-premises networks, because Google Cloud reserves four addresses in every subnet.
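As a quick sanity check of the arithmetic, here is a minimal sketch in Python (an illustration only, not a Google Cloud API):

```python
def usable_hosts(prefix_len: int, reserved: int = 4) -> int:
    """Usable host addresses in an IPv4 subnet of the given prefix length.

    Google Cloud reserves 4 addresses per subnet (network, default
    gateway, second-to-last, and broadcast), versus the usual 2.
    """
    return 2 ** (32 - prefix_len) - reserved

print(usable_hosts(24))              # Google Cloud /24: 256 - 4 = 252
print(usable_hosts(24, reserved=2))  # on-premises /24:  256 - 2 = 254
```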
The flow of IP address assignment looks like this:
Nodes are assigned IP addresses from the cluster’s VPC network
By default, internal load balancer IP addresses are automatically assigned from the node IPv4 block. If necessary, you can create a dedicated range for your load balancers and use the loadBalancerIP option to specify an address from that range.
Pods are assigned addresses from a range issued to the node they run on. The default maximum is 110 pods per node. To provide addresses for that many pods, the number is doubled (110 × 2 = 220) and the nearest subnet that fits, a /24, is used; the extra space leaves a buffer for scheduling pods (see the sketch after this list). This limit is customizable at cluster creation time.
Containers share the IP address of the Pods they run on.
Service (Cluster IP) addresses are assigned from an address pool reserved for services.
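The doubling rule for per-node pod ranges mentioned above works out as follows; this is a sketch of the arithmetic, not GKE’s implementation:

```python
import math

def pod_range_prefix(max_pods_per_node: int) -> int:
    """Smallest per-node subnet prefix that covers twice the pod limit
    (GKE doubles the limit to leave scheduling headroom)."""
    needed = 2 * max_pods_per_node            # e.g. 110 * 2 = 220
    host_bits = math.ceil(math.log2(needed))  # 220 fits in 8 bits (256)
    return 32 - host_bits

print(pod_range_prefix(110))  # 24 -> a /24 per node (the default)
print(pod_range_prefix(32))   # 26 -> a /26 per node
```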
The IP address ranges for VPC-native clusters section of the VPC-native clusters document gives you an example of planning and scoping address ranges.
DNS provides name-to-IP-address resolution, which allows name entries to be created automatically for services. There are a few options in GKE.
kube-dns – The Kubernetes-native add-on service. kube-dns runs as a Deployment that is exposed via a ClusterIP, and by default pods in a cluster use this service for DNS queries. The “Using kube-dns” document describes how it works.
Cloud DNS – Google Cloud’s managed DNS service, which can also manage your cluster DNS. A few benefits of Cloud DNS over kube-dns are:
Reduces the management of a cluster-hosted DNS server.
Supports local resolution of DNS on GKE nodes. This is done by caching responses locally, which provides both speed and scalability.
Integrates with Google Cloud’s operations suite for monitoring.
Service Directory is another service from Google Cloud that can be integrated with GKE and Cloud DNS to manage services via namespaces.
The gke-networking-recipes GitHub repo has some Service Directory examples you can try out for internal load balancers, ClusterIP, headless, and NodePort services.
For a deeper understanding of DNS options in GKE please check out the article DNS on GKE: Everything you need to know.
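Whichever provider answers the queries, pods resolve services through the standard <service>.<namespace>.svc.cluster.local names. Here is a minimal sketch, run from inside a pod; the service and namespace names are placeholders:

```python
import socket

# Resolve a Kubernetes Service name from inside a pod. The answer comes
# from whichever DNS provider the cluster uses (kube-dns or Cloud DNS).
# "my-service" and "my-namespace" are placeholder names.
fqdn = "my-service.my-namespace.svc.cluster.local"
print(f"{fqdn} -> {socket.gethostbyname(fqdn)}")  # prints the ClusterIP
```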
Load balancing
These control access and distribute traffic across cluster resources. Some options in GKE are:
Ingress
These handle HTTP(S) traffic destined for services in your cluster, using the Ingress resource type. Creating an Ingress provisions an HTTP(S) load balancer for GKE. When configuring it, you can assign a static IP address to the load balancer to ensure that the address remains the same.
In GKE you can provision both external and internal Ingress. The guides linked below show you how to configure each:
Configuring ingress for internal HTTP(S) load balancing
Configuring ingress for external load balancing
GKE also lets you take advantage of container-native load balancing, which directs traffic straight to pod IPs using Network Endpoint Groups (NEGs).
There are three main points to understand in this topic:
Frontend – This exposes your service to clients through a frontend that accepts traffic based on various rules; it can be a DNS name or a static IP address.
Load balancing – Once traffic is accepted, the load balancer distributes it to the available resources that serve the request, based on rules.
Backend – The various endpoint types that can be used in GKE.
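To make the frontend and backend pieces concrete, here is a minimal sketch using the official Kubernetes Python client to create a GKE external Ingress whose frontend is pinned to a reserved static IP. The Ingress name, backing service name ("web"), and reserved address name ("my-static-ip") are placeholder values:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes your kubectl context points at the GKE cluster

# "web-ingress", "web", and "my-static-ip" are placeholder names.
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="web-ingress",
        annotations={
            # Pin the frontend to a reserved global static IP so the
            # address survives re-creation of the Ingress.
            "kubernetes.io/ingress.global-static-ip-name": "my-static-ip",
        },
    ),
    spec=client.V1IngressSpec(
        default_backend=client.V1IngressBackend(
            service=client.V1IngressServiceBackend(
                name="web",
                port=client.V1ServiceBackendPort(number=80),
            )
        )
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```

For container-native load balancing, the backing Service carries the cloud.google.com/neg: '{"ingress": true}' annotation so the load balancer targets pod IPs in NEGs directly instead of going through node ports (on recent VPC-native clusters this is the default).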
In GKE, you have several ways to design your cluster’s networking:
Standard – The admin configures the cluster’s underlying infrastructure. This mode is beneficial if you need a deeper level of control and responsibility.
Autopilot – GKE provisions and manages the cluster’s underlying infrastructure. It is pre-configured for use and gives you largely hands-off management.
Private cluster – Allows only internal IP connections. If a client needs access to the internet (e.g., for updates), you can use Cloud NAT.
Private Service Access – Lets your VPC communicate with service producers’ services via private IP addresses.
Private Service Connect – Allows private consumption of services across VPC networks.
Below is a short high-level recap.
IP addresses are assigned to various resources in your cluster:
Nodes
Pods
Containers
Services
IP address ranges are reserved for the various resource types. You can adjust the range sizes to meet your requirements through subnetting. Restricting unnecessary external access to your cluster is recommended.
By default, pods can communicate across the cluster.
To expose applications running on pods you need a service.
Cluster IPs are assigned to services.
For DNS resolution, you can rely on the native kube-dns option or use Google Cloud DNS within your GKE cluster.
Load balancers can be used internally and externally with your cluster to expose applications and distribute traffic.
Ingress handles HTTP(S) traffic and utilizes the HTTP(S) load balancer service from Google Cloud. Ingress can be used in both internal and external configurations.
To learn more about GKE networking, check out the following:
Documentation: IP address management strategies when migrating to GKE
Documentation: Best practices for GKE networking
Blog: DNS on GKE: Everything you need to know
YouTube: GKE Concepts of Networking
Want to ask a question, find out more, or share a thought? Please connect with me on LinkedIn or Twitter: @ammettw.
Read More for the details.
Amazon Redshift Query Editor v2, a free web-based tool for data exploration and analysis using SQL, is now enhanced with additional ease-of-use and security capabilities. Amazon Redshift Query Editor v2 simplifies the process for admins and end users to connect to Amazon Redshift clusters using their identity provider (IdP). As an administrator, you can now integrate your identity provider (IdP) with the AWS console to access Query Editor v2 as a federated user. You need to configure your IdP to pass in the database user and (optionally) database groups by adding specific principal tags as SAML attributes.
Read More for the details.
You can now use AWS Resilience Hub with Elastic Load Balancing (ELB) and Amazon Route 53 Application Recovery Controller readiness checks to help meet your application’s recovery objectives. Resilience Hub provides you with a single place to define, validate, and track the resilience of your applications so that you can avoid unnecessary downtime caused by software, infrastructure, or operational disruptions.
Read More for the details.
Starting today, customers of AWS Cost Anomaly Detection will see a new interface in the console, where they can view and analyze anomalies and their root causes. AWS Cost Anomaly Detection monitors customers’ spending patterns to detect and alert on anomalous (increased) spend, and to provide root-cause analyses.
Read More for the details.
Amazon Rekognition Custom Labels is an automated machine learning (AutoML) service that allows customers to build custom computer vision models to classify and identify objects in images that are specific and unique to their business. Custom Labels does not require customers to have any prior computer vision expertise or knowledge.
Read More for the details.
We are excited to announce the availability of Amazon EC2 P4d instances in the AWS GovCloud (US) Region. P4d instances are optimized for applications in Machine Learning (ML) and High Performance Computing (HPC).
Read More for the details.
Diagnosing and treating chronic pain can be complex, difficult, and full of uncertainties for a patient and their treating physician. Depending on the condition of the patient and the knowledge of the physician, making the correct diagnosis takes time, and experimenting with different treatments might be required.
This trial-and-error process can leave the patient in a world of pain and confusion until the best remedies can be prescribed. It’s a situation similar to the daily struggle that many of today’s security operations teams face.
Screaming from the mountain tops “just patch it!” isn’t very helpful when security teams aren’t sure if applying a patch might create even worse issues like crashes, incompatibility, or downtime. Like a patient with chronic pain, they may not know the source of the pain in their system. Determining which vulnerabilities to prioritize patching, and ensuring those fixes actually leave you with a more secure system, is one of the hardest tasks a security team can face. This is where a Vulnerability Exploitability eXchange (VEX) comes in.
In previous blogs, we’ve discussed how establishing visibility and awareness into patient safety and technology is vital to creating a resilient healthcare system. We’ve also looked at how combining software bills of materials (SBOMs) with Google’s Supply-chain Levels for Software Artifacts (SLSA) framework can help build more secure technology that enables resilience.
The SBOM provides visibility into the software you’re using and where it comes from, while SLSA provides guidelines that help increase the integrity and security of software you then build. Rapid diagnostic assessments can be added to that equation with VEX, which the National Telecommunications and Information Administration describes as a “companion” document that lives side-by-side with SBOM.
To go back to our medical metaphor, VEX is a mechanism for software providers to tell security teams where to look for the source of the pain. VEX data can help with software audits when inventory and vulnerability data need to be captured at a specific point in time. That data also can be embedded into automated security tools to make it easier to prioritize vulnerability patching.
You can then think of the SBOM as the prescription label on a bottle of medication, SLSA as the child-proof lid and tamper-proof seal guaranteeing the safety of the medication, and VEX as the bottle’s safety warnings. As a diagnostic aid, a VEX can help security teams accurately diagnose “what could hurt” and find system weaknesses before the bad guys do.
Yet making an accurate assessment of that threat model can be challenging, especially when looking at the software we use to run systems. The ability to quickly and accurately evaluate an organization’s weaknesses and pain points can be vital to hastening the response to a vulnerability and stopping cyberattacks before they become destructive. We believe that VEX is an important part of the equation to help secure the software supply chain.
As an example, look no further than the Apache Log4j vulnerabilities revealed in December 2021. Global industries, including healthcare, were dealt another blow when Apache’s Log4j 2 logging system was found to be so vulnerable that relatively unsophisticated threat actors could quickly infiltrate and take over systems. Through research conducted by Google and information contributed by CISA, we learned of examples where vulnerabilities in Log4j 2, a single software component, could potentially impact thousands of companies using software that depends on it, because of its near-ubiquitous use.
While a VEX would not capture zero-day vulnerabilities, it would be able to inform security teams of other known vulnerabilities in Log4j 2. Once vulnerabilities have been published, security teams could use SBOM to find them, and use VEX to understand if remediation is a priority or not.
A key reason we focus on visibility mechanisms like SBOM and SLSA is because they give us the ability to understand our risks. Without the ability to see into what we must protect, it can be difficult to determine how to quickly reduce risk.
Visibility is a crucial first step to stopping malicious hackers. Yet without context, visibility leaves security teams overwhelmed with data. Why? Well, where would you start when trying to mitigate the 30,000 known vulnerabilities affecting just open source software, according to the Open Source Vulnerabilities database (OSV)? NIST’s National Vulnerability Database (NVD) is tracking close to 181,000 vulnerabilities. We’ll be patching into the next millennium if we adopt a “patch everything” approach.
It’s impossible to address every vulnerability individually. To make progress, security teams need to be able to prioritize findings and go after the ones that will have the greatest impact first. The goal of a VEX artifact is to make prioritization a little easier.
While SBOMs are created or changed when the material included in a build is updated, VEX documents are intended to be updated and redistributed when a new vulnerability emerges or a threat changes. This means that VEX and SBOM should be maintained separately. Since security researchers and organizations are constantly discovering new cybersecurity vulnerabilities and threats, a more dynamic mechanism like VEX can help ensure builders and operators can quickly ascertain the risks of the software they are using.
Let’s dig into this VEX example from CycloneDX. You can see the list of vulnerabilities found, third parties who track and report those vulnerabilities, vulnerability ratings per CVSS, and most importantly, a statement from the developer that guides the operator reading the VEX to those vulnerabilities that are exploitable and need to be protected. At the bottom, you’ll see the VEX “affects” an SBOM.
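For readers who haven’t followed the link, here is a minimal, hypothetical VEX in CycloneDX’s JSON form; the CVE, rating, analysis, and BOM reference below are invented for illustration and are not the values from the CycloneDX example:

```python
import json

# A minimal, hypothetical CycloneDX (1.4) VEX document. All values are
# illustrative; they are not taken from the CycloneDX example.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "vulnerabilities": [
        {
            "id": "CVE-2021-44228",
            "source": {
                "name": "NVD",
                "url": "https://nvd.nist.gov/vuln/detail/CVE-2021-44228",
            },
            "ratings": [
                {"score": 10.0, "severity": "critical", "method": "CVSSv31"}
            ],
            "analysis": {
                # The supplier's statement: exploitable as shipped, or not?
                "state": "exploitable",
                "detail": "Bundled log4j-core is reachable from user input.",
            },
            # Links this finding to a component in the companion SBOM.
            "affects": [
                {"ref": "urn:cdx:example-serial/1#log4j-core-component"}
            ],
        }
    ],
}
print(json.dumps(vex, indent=2))
```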
This information allows the user of the VEX document to refer to its companion SBOM. By necessity, the VEX is intentionally decoupled from the SBOM because they need to be updated at different times. A VEX document will need to be updated when new vulnerabilities emerge. An SBOM will need to be updated when changes to the software are made by a manufacturer. Although they can and need to be updated separately, the contents of each document can stay aligned because they are linked.
VEX could dramatically improve how security vulnerabilities are handled. It’s not uncommon to find operators buried in vulnerabilities, best-guessing the ones that need fixing, and trying to make sense of tens (and sometimes hundreds) of pages of documentation to determine the best, lowest impact fix.
With SBOM+SLSA+VEX, operators use software-driven mechanisms to conduct analyses and evaluate risk instead of relying on intuition and best guesses. The tripartite SBOM+SLSA+VEX approach provides an up-to-date list of issues and perspective on what needs attention. This is a transformative development in security, enabling teams to get a better handle on vulnerability mitigation, starting where it could hurt the most.
Driven by repeated cyberattacks on critical infrastructure such as healthcare, government regulators have taken a more interested stance in software security and supply chains. Strengthening the effectiveness of SBOMs in the United States is a big part of the newly proposed Protecting and Transforming Cyber Health Care (PATCH) Act. The law would require medical device manufacturers to adhere to minimum cybersecurity standards in their products, including the creation of SBOMs for their devices and plans to monitor and patch any cybersecurity vulnerabilities discovered during a device’s lifetime.
Meanwhile, new draft medical device cybersecurity guidance from the FDA continues that agency’s aggressive encouragement of medical device manufacturers to improve the cybersecurity resilience of their products. The White House has weighed in on SBOMs as well: an Executive Order from May 2021 lays out requirements for secure software development, including the production and distribution of SBOMs for software used by the federal government.
Regardless of how these initiatives pan out, Google believes controls like those provided by SBOM+SLSA+VEX are critical to protect software and build a resilient healthcare ecosystem. This approach provides detailed, critical risk exposure data to security teams so they can take necessary steps to reduce immediate and long-term risks.
At Google, we are working with the Open Source Security Foundation on supporting SBOM development. Our Know, Prevent, Fix report on secure software development gives a broader outline of how Google thinks about securing open source software from preventable vulnerabilities. You can read more about these efforts for securing workloads on Google Cloud from our Cloud Architecture Center. Take a look at Cloud Build, a Google Cloud service that can be used to generate build artifacts up to SLSA Level 2.
Customers often have difficulty getting full visibility and control over vulnerabilities because of their dependence on open source software (OSS). Assured Open Source Software (Assured OSS) is the Google Cloud service that helps teams both secure the external OSS packages they use and overcome avoidable vulnerabilities by simply eliminating them from the code base. Finally, ask us about Google’s Cybersecurity Action Team, the world’s premier security advisory team and its singular mission supporting the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses.
If you’re a software supplier, please consider our suggestions above. Whether you are or not, you should begin:
Contractually mandating SBOM+VEX+SLSA (or their equivalent) artifacts to be provided for all new software.
Training procurement teams to ask for and use SBOM+VEX+SLSA to make purchasing decisions. There should be no reason an organization procures software or hardware with known, preventable issues. Even if it does, the information these mechanisms provide should help security teams decide whether they can live with the risks before the equipment enters their networks.
Establishing a governance program that ensures those who control procurement decisions are aware of and owning the risks associated with software they are buying.
Enabling security teams to build pipelines to ingest SBOM+VEX+SLSA artifacts into their security operations and use it to strategically advise and drive mitigation activities.
At Google, we believe the path to resilience begins with building visibility and structural awareness into the software, hardware, and equipment it rides on as a critical first step. Time will tell if VEX becomes widely adopted, but the point behind it won’t change—we can’t know how we are vulnerable without visibility. VEX is an important concept in this regard.
Next month, we’ll be shifting gears slightly to focus on building resilience by establishing a security culture that obsesses over its patients and products.
Read More for the details.
A critical component of any security operations team’s job is to deliver high-fidelity detections of potential threats across the breadth of adversary tactics. But increasingly sophisticated threat actors, an expanding attack surface, and an ever-present cybersecurity talent shortage make this task more challenging than ever.
Google keeps more people safe online than anyone else. Individuals, businesses and governments globally depend on our products that are secure-by-design and secure-by-default. Part of the “magic” behind Google’s security is the sheer scale of threat intelligence we are able to derive from our billions of users, browsers, and devices.
Today, we are putting the power of Google’s intelligence in the hands of security operations teams. We are thrilled to announce the general availability of curated detections as part of our Chronicle SecOps Suite. These detections are built by our Google Cloud Threat Intelligence (GCTI) team and are actively maintained to reduce manual toil for your team.
Our detections provide security teams with high-quality, actionable, out-of-the-box threat detection content that is curated, built, and maintained by Google Cloud Threat Intelligence (GCTI) researchers. Our scale and depth of intelligence, gained by securing billions of users every day, gives us a unique vantage point for crafting effective and targeted detections. These native detection sets cover a wide variety of threats for cloud and beyond, including Windows-based attacks like ransomware, remote-access tools (RATs), infostealers, data exfiltration, suspicious activity, and weakened configurations.
With this launch, security teams can smoothly leverage Google’s expertise and unique visibility into the threat landscape. This release helps understaffed and overstressed security teams keep up with an ever evolving threat landscape, quickly identify threats, and drive effective investigation and response. With this new release, security teams can:
Enable high quality curated detections with a single click from within the Chronicle console.
Operationalize data with high-fidelity threat detections, stitched with context available from authoritative sources (such as IAM and CMDB).
Accelerate investigation and response by finding anomalous assets and domains with prevalence visualization for the detections triggered.
Map detection coverage to the MITRE ATT&CK framework to better understand adversary tactics and techniques and uncover potential gaps in defenses.
Detections are constantly updated and refined by GCTI researchers based on the evolving threat landscape. The first release of curated detections includes two categories that cover a broad range of threats, including:
Windows-based threats: Coverage for several classes of threats including infostealers, ransomware, RATs, misused software, and crypto activity.
Cloud attacks and cloud misconfigurations: Secure cloud workloads with additional coverage around exfiltration of data, suspicious behavior, and additional vectors.
Let’s look at an example of how you can put curated detections to work within the Chronicle dashboard, monitor coverage, and map to MITRE ATT&CK®.
An analyst can learn more details around specific detections and understand how they map to the MITRE ATT&CK framework. There are customized settings to configure deployment and alerting, and specify exceptions via reference lists.
In the Chronicle rules dashboard, you can see each rule that has generated a detection against your log data, observe the detections associated with the rule, and pivot to investigative views. For example, here is the detection view from the timeline of an Empire PowerShell stager launch triggered by the Windows RAT rule set. You can also easily pivot to associated information and investigate the asset on which it was triggered.
By surfacing impactful, high-efficacy detections, Chronicle can enable analysts to spend time responding to actual threats and reduce alert fatigue. Our customers who used curated detections during our public preview were able to detect malicious activity and take actions to prevent threats earlier in their lifecycle. And there’s more to come. We will be delivering a steady release of new detection categories covering a wide variety of threats, community-driven content, and other out-of-the-box analytics.
Ready to put Google’s intelligence to work in your Security Operations Center? Contact Google Cloud sales or your customer success manager (CSM). You can also learn more about all these new capabilities in Google Chronicle in our product documentation.
Thank you to Mike Hom (Product Architect, Chronicle) and Ben Walter (Engineering Manager, Google Cloud Threat Intelligence), who helped with this launch.
Read More for the details.
SageMaker Pipelines is a tool that helps you build machine learning pipelines that take advantage of direct SageMaker integration. SageMaker Pipelines now supports creating and testing pipelines on your local machine (e.g., your computer). With this launch, you can test your SageMaker Pipelines scripts and parameter compatibility locally before running them on SageMaker in the cloud. SageMaker Pipelines local mode supports the following steps: processing, training, transform, model, condition, and fail. These steps give you the flexibility to define various entities in your machine learning workflow. Using Pipelines local mode, you can quickly and efficiently debug errors in your scripts and pipeline definition. You can then seamlessly switch your workflows from local mode to SageMaker’s managed environment by updating the session.
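Here is a minimal sketch of what local mode looks like with the SageMaker Python SDK; the training image URI and IAM role ARN are placeholders you would supply:

```python
from sagemaker.estimator import Estimator
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.pipeline_context import LocalPipelineSession
from sagemaker.workflow.steps import TrainingStep

# Execute pipeline steps in local Docker containers instead of the cloud.
local_session = LocalPipelineSession()

estimator = Estimator(
    image_uri="<your-training-image-uri>",  # placeholder
    role="<your-sagemaker-role-arn>",       # placeholder
    instance_type="ml.m5.xlarge",           # runs locally despite the type
    instance_count=1,
    sagemaker_session=local_session,
)

step_train = TrainingStep(name="TrainModel", estimator=estimator)

pipeline = Pipeline(
    name="my-local-pipeline",
    steps=[step_train],
    sagemaker_session=local_session,
)
pipeline.create(role_arn="<your-sagemaker-role-arn>")
execution = pipeline.start()  # executes on this machine

# Switching to SageMaker's managed environment is just a session swap:
# use sagemaker.workflow.pipeline_context.PipelineSession instead.
```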
Read More for the details.
Starting today, Amazon EC2 High Memory instances with 12TB of memory (u-12tb1.112xlarge) are available in the US East (Ohio) region. Additionally, high memory instances with 6TB of memory (u-6tb1.56xlarge, u-6tb1.112xlarge) are now available in the South America (Sao Paulo) region and instances with 3TB of memory (u-3tb1.56xlarge) are now available in the South America (Sao Paulo) and Asia Pacific (Sydney) regions.
Read More for the details.
The mission of the Google Cloud blog is to be your definitive, trusted source for news, guidance, and perspectives from Google on cloud computing. This site unifies existing blogs covering a variety of Google Cloud products and services, including Google Cloud Platform, Workspace, Google Maps Platform, and Chrome Enterprise. We hope that by bringing these voices together under one roof, we’ll inspire more of you to find unique solutions to your business goals through cloud-based tools.
The Cloud blog is our daily dialogue with developers, business users and enterprise customers. Whether you’re building the next generation of apps, discovering insights from data, or collaborating with teams across the globe, we want to help you at every point in your journey to the cloud.
Read More for the details.
Private endpoint user-defined routes (UDR) support is now generally available in most public regions.
Read More for the details.
You can now use an Amazon Interactive Video Service (Amazon IVS) basic channel for HD (720p) and Full HD (1080p) quality streams, in addition to SD (480p). The expanded functionality is designed to enable streamers and viewers globally to enjoy higher video quality when using a basic channel. A basic channel will deliver only the original input video quality to viewers, whereas a standard channel provides multiple qualities of output, allowing better playback quality across a range of devices and network conditions.
Read More for the details.
Amazon Managed Streaming for Apache Kafka Serverless (Amazon MSK Serverless) is now integrated with AWS CloudFormation and HashiCorp Terraform, which allows customers to describe and provision Amazon MSK Serverless clusters using code. These services make it easy to provision and configure Amazon MSK Serverless clusters in a repeatable, automated, and secure manner.
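As a rough sketch of the CloudFormation path using boto3 (built around the AWS::MSK::ServerlessCluster resource type; the subnet and security group IDs below are placeholders):

```python
import json
import boto3

# Hypothetical subnet and security group IDs; replace with your own.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ServerlessCluster": {
            "Type": "AWS::MSK::ServerlessCluster",
            "Properties": {
                "ClusterName": "demo-serverless-cluster",
                "VpcConfigs": [
                    {
                        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
                        "SecurityGroups": ["sg-cccc3333"],
                    }
                ],
                # MSK Serverless clusters use IAM-based client authentication.
                "ClientAuthentication": {"Sasl": {"Iam": {"Enabled": True}}},
            },
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="msk-serverless-demo",
    TemplateBody=json.dumps(template),
)
```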
Read More for the details.