Today, we are announcing the availability of AWS Backup support for Amazon Timestream for LiveAnalytics in the Asia Pacific (Mumbai) Region. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon Timestream for LiveAnalytics along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.
With this launch, AWS Backup support for Amazon Timestream for LiveAnalytics is available in the following Regions: US East (N. Virginia, Ohio, Oregon), Asia Pacific (Sydney, Tokyo), and Europe (Frankfurt, Ireland). For more information on regional availability, feature availability, and pricing, see the AWS Backup pricing page and the AWS Backup Feature Availability page.
AWS Database Migration Service (AWS DMS) now supports Data Masking, enabling customers to transform sensitive data at the column level during migration, helping them comply with data protection regulations like GDPR. Using AWS DMS, you can now create copies of your data that redact the column-level information you need to protect.
DMS Data Masking automatically masks the portions of data you specify. It offers three transformation techniques: digit randomization, digit masking, and hashing. Data Masking is available for all endpoints supported by DMS Classic and DMS Serverless in version 3.5.4.
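As a rough sketch, a masking rule is declared in the DMS table-mapping document as a column-level transformation. The exact `rule-action` names are assumptions to verify against the DMS transformation-rules documentation for version 3.5.4:

```python
# Sketch of a DMS table-mapping document with a column-level masking rule.
# The rule-action value ("data-masking-digits-mask") is an assumed name for
# the digit-masking technique; verify it against the DMS documentation.
import json

table_mappings = {
    "rules": [
        {
            "rule-type": "transformation",
            "rule-id": "1",
            "rule-name": "mask-ssn",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "hr",          # hypothetical schema
                "table-name": "employees",    # hypothetical table
                "column-name": "ssn",
            },
            # One of the three techniques (digit masking); the others are
            # digit randomization and hashing.
            "rule-action": "data-masking-digits-mask",
        }
    ]
}

# This JSON string is what you would pass as TableMappings when creating
# a replication task (for example, via create_replication_task in boto3).
print(json.dumps(table_mappings, indent=2))
```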
AWS Marketplace now provides AI-powered product summaries and comparisons for popular software as a service (SaaS) products, helping you make faster and more informed software purchasing decisions. Use this feature to compare similar SaaS products across key evaluation criteria such as customer reviews, product popularity, features, and security credentials. Additionally, you can gain AI-summarized insights into key decision factors like ease of use, customer support, and cost effectiveness.
Sifting through thousands of options on the web to find software products that best fit your business needs can be challenging and time-consuming. The new product comparisons feature in AWS Marketplace simplifies this process. It uses machine learning to recommend similar SaaS products for consideration, then uses generative AI to summarize product information and customer reviews, highlight unique aspects of each product, and explain key differences so you can identify the best product for your use case. You can also customize comparison sets and download comparison tables to share with colleagues.
The product comparisons feature is available for popular SaaS products in all commercial AWS Regions where AWS Marketplace is available.
When you use AWS Database Migration Service (DMS) and DMS Schema Conversion to migrate a database, you might need to convert the embedded SQL in your application to be compatible with your target database. Rather than converting it manually, you can use Amazon Q Developer in the IDE to automate the conversion.
Amazon Q Developer uses metadata from a DMS Schema Conversion to convert embedded SQL in your application to a version that is compatible with your target database. Amazon Q Developer will detect Oracle SQL statements in your application and convert them to PostgreSQL. You can review and accept the proposed changes, view a summary of the transformation, and follow the recommended next steps in the summary to verify and test the transformed code.
This capability is available within the Visual Studio Code and IntelliJ IDEs.
Today, AWS Control Tower added AWS Backup to the list of AWS services you can optionally configure with prescriptive guidance. This configuration option allows you to select from a range of recommended backup plans, seamlessly integrating data backup and recovery workflows into your Control Tower landing zone and organizational units. A landing zone is a well-architected, multi-account AWS environment based on security and compliance best practices. AWS Control Tower automates the setup of a new landing zone using best-practices blueprints for identity, federated access, logging, and account structure; with this launch, it adds data retention.
When you enable AWS Backup on your landing zone and select the applicable organizational units, Control Tower creates a backup plan with predefined rules, such as retention days, frequency, and the time window during which backups occur, that define how to back up AWS resources across all governed member accounts. Applying the backup plan at the Control Tower landing zone level keeps data protection consistent across all member accounts, in line with best-practice recommendations from AWS Backup.
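To make the rule structure concrete, here is a minimal backup plan expressed in the shape of AWS Backup's CreateBackupPlan request. The names and values are illustrative, not the plans Control Tower actually deploys:

```python
# A minimal AWS Backup plan showing the kinds of rules described above:
# a schedule (frequency), a start window, and a retention lifecycle.
# All names and values here are hypothetical.
backup_plan = {
    "BackupPlanName": "landing-zone-daily",
    "Rules": [
        {
            "RuleName": "daily-35-day-retention",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 ? * * *)",  # daily at 05:00 UTC
            "StartWindowMinutes": 60,       # backup must start within 1 hour
            "CompletionWindowMinutes": 180,
            "Lifecycle": {"DeleteAfterDays": 35},  # retention period
        }
    ],
}

# With boto3 you would pass this as:
#   boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
```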
Today, AWS announces the general availability (GA) of Data Exports for FOCUS 1.0, which has been in public preview since June 2024. FOCUS 1.0 is an open-source cloud cost and usage specification that provides standardization to simplify cloud financial management across multiple sources. Data Exports for FOCUS 1.0 enables customers to export their AWS cost and usage data with the FOCUS 1.0 schema to Amazon S3. The GA release of FOCUS 1.0 is a new table in Data Exports in which key specification conformance gaps have been solved compared to the preview table.
With Data Exports for FOCUS 1.0 (GA), customers receive their costs in four standardized columns: ListCost, ContractedCost, BilledCost, and EffectiveCost. The export provides consistent treatment of discounts and amortization of Savings Plans and Reserved Instances. The standardized FOCUS schema ensures data can be reliably referenced across sources.
Data Exports for FOCUS 1.0 (GA) is available in the US East (N. Virginia) Region, but includes cost and usage data covering all AWS Regions, except AWS GovCloud (US) Regions and AWS China (Beijing and Ningxia) Regions.
Learn more about Data Exports for FOCUS 1.0 in the User Guide, product details page, and at the FOCUS project webpage. Get started by visiting the Data Exports page in the AWS Billing and Cost Management console and creating an export of the new GA table named “FOCUS 1.0 with AWS columns”. After creating a FOCUS 1.0 GA export, you will no longer need your preview export. You can view the specification conformance of the GA release here.
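Exports can also be created programmatically. The following is a hedged sketch using the Data Exports API (boto3 client "bcm-data-exports"); the SQL table identifier `FOCUS_1_0_AWS` and the bucket details are assumptions to adapt from the Data Exports User Guide:

```python
# Sketch of a CreateExport request for the FOCUS 1.0 GA table. The table
# identifier FOCUS_1_0_AWS, bucket name, and output options are assumptions;
# check the Data Exports User Guide for the exact values.
export_request = {
    "Export": {
        "Name": "focus-1-0-ga",
        "DataQuery": {"QueryStatement": "SELECT * FROM FOCUS_1_0_AWS"},
        "DestinationConfigurations": {
            "S3Destination": {
                "S3Bucket": "my-cost-data-bucket",  # hypothetical bucket
                "S3Prefix": "focus",
                "S3Region": "us-east-1",
                "S3OutputConfigurations": {
                    "OutputType": "CUSTOM",
                    "Format": "PARQUET",
                    "Compression": "PARQUET",
                    "Overwrite": "OVERWRITE_REPORT",
                },
            }
        },
        "RefreshCadence": {"Frequency": "SYNCHRONOUS"},
    }
}

# boto3.client("bcm-data-exports").create_export(**export_request)
```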
Today, AWS announces the public preview of the enhanced AWS Pricing Calculator, which provides accurate cost estimates for new workloads or modifications to your existing AWS usage by incorporating eligible discounts. It also helps you estimate the cost impact of commitment purchases on your organization’s consolidated bill. With today’s launch, AWS Pricing Calculator now allows you to apply eligible discounts to your cost estimates, enabling you to make informed financial planning decisions.
The enhanced Pricing Calculator, available within the AWS Billing and Cost Management Console, provides two types of cost estimates: cost estimation for a workload, and estimation of a full AWS bill. You can import your historical usage or create net-new usage when creating a cost estimate. You can also get started by importing existing Pricing Calculator estimates and sharing an estimate with other AWS console users. Using the enhanced Pricing Calculator, you can confidently assess the cost impact and understand your return on investment for migrating workloads, planning new workloads, or growing existing workloads, and you can plan for commitment purchases on AWS. You can also create or access cost estimates using a new public cost estimation API.
The enhanced Pricing Calculator is available in all AWS commercial Regions, excluding China. To get started with the new Pricing Calculator, visit the AWS Billing and Cost Management Console. To learn more, visit the AWS Pricing Calculator user guide and blog.
Welcome to the second Cloud CISO Perspectives for November 2024. Today, Monica Shokrai, head of business risk and insurance, Google Cloud, and Kimberly Goody, cybercrime analysis lead, Google Threat Intelligence Group, explore the role cyber-insurance can play in combating the scourge of ransomware.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
–Phil Venables, VP, TI Security & CISO, Google Cloud
Ending the ransomware scourge starts with reporting, not blocking cyber-insurance
By Monica Shokrai, head of business risk and insurance, Google Cloud, and Kimberly Goody, cybercrime analysis lead, Google Threat Intelligence Group
Ransomware is wreaking havoc around the world, underscoring the need for better collective defensive action from public and private sector organizations.
Globally, ransomware continues to be a complicated and pernicious threat, according to our M-Trends 2024 report. It accounts for more than 20 percent of cyberattacks, year after year. Earlier this year, ransomware at one U.S. health insurance organization forced hospitals and pharmacies to shut down operations for several weeks, a disruption that has cost the company an estimated $872 million so far.
The numbers paint a dire picture of the security impact of operating legacy systems:
71% said that legacy technology has left organizations less prepared for the future.
63% believe that their organization’s technology landscape is less secure than it was in the past.
More than 66% told us that their organizations are investing more time and money than ever in securing their environments — but still experience costly security incidents.
81% of organizations experience at least one security incident per year.
Organizations experience eight security incidents on average per year.
We know many security leaders have convinced the business to invest in more security tools, because the survey also found that 61% of organizations are using more security tools than they did two years ago. Yet while more than two-thirds of organizations are investing more time and money in securing their environments, many are still experiencing expensive security incidents.
Victims of these attacks are often left with the difficult decision to pay a ransom. At least $3.1 billion has been paid in ransom for more than 4,900 ransomware attacks since 2021, wrote Anne Neuberger, U.S. deputy national security adviser for cyber and emerging technology, in October — and these are only the attacks that we know of because they’ve been reported.
Law enforcement and impacted organizations have stepped up their fight against ransomware this year, developing a multifaceted approach that combines strategic interventions, technological defenses, and law enforcement efforts. So far, that approach has proven helpful: these efforts led to 14 law enforcement disruptions of ransomware operations as of September.
Despite these actions, attacks continue. Defending against ransomware is so complicated that even some independent cybersecurity researchers, who had been calling for bans on insurance payments to organizations suffering from ransomware attacks, have backed down from their hard-line positions.
While solutions to the threat are complex, cyber-insurance can play a key role. Cyber-insurers can help reduce attackers’ financial gains from incidents, first and most importantly by requiring a minimum level of security standards to strengthen an organization’s defenses before approving an insurance policy.
Insurers have also been shown to reduce attackers’ financial gains by limiting or avoiding ransom payments altogether and by advising on best practices, particularly regarding backups. If a ransomware attacker demands a $2 million bounty to restore data, cyber-insurance can embolden the organization under attack to confidently counter with a demand for a reduced payment, strengthening its position so that it pays a lower sum, or none at all.
However, some believe that cyber-insurance encourages ransomware payments and would prefer that cyber-insurance coverage for ransomware be banned. Outright bans on coverage for ransomware payments are likely to harm small businesses more than large ones: larger businesses are often better positioned to absorb the financial cost of ransomware payments on their own, while smaller businesses would be hurt in outsized ways.
If the ultimate goal of banning insurers from reimbursing ransomware payments is to reduce the profitability of ransomware attacks, then actions that require victims to report payments have the potential to be more impactful. Mandatory reporting could improve law enforcement tracking efforts and introduce more opportunities to recover funds even after payment is sent.
If larger companies continue to pay the ransom despite insurance not covering it, the impact of a ban on the insurance coverage becomes less meaningful. However, a more effective approach may be to incentivize the adoption of policies that improve the digital resilience of private and public-sector organizations to drive down the risks they face. As Phil and Andy wrote in the previous edition of this newsletter, this often means updating legacy IT.
One approach is to incentivize the adoption of secure by design and secure by default technologies, such as those that we develop at Google Cloud. Cowbell Cyber, a cyber-insurance firm, recently found that “businesses using Google Cloud report a 28% lower frequency of cyber incidents relative to other cloud users.” The report also found that Google Cloud exhibited the lowest severity of cyber incidents compared to other cloud service providers.
At-Bay, another cyber-insurance firm, found customers using Google Workspace experienced, on average, 54% fewer email security incidents.
There is an opportunity with AI, as well, to better scale existing anti-ransomware efforts to meet the needs of defenders. We’ve already begun to see AI have a positive impact by helping organizations grow their threat detection efforts and more efficiently address vulnerabilities before attackers can exploit them.
In your fight against ransomware, Google Cloud is here to help you every step of the way. From technology solutions and Mandiant Consulting Services, to threat intelligence insight, we can help you prepare for, protect against, and respond to ransomware attacks. You can learn more about the latest ransomware protection and containment strategies in this report.
For more leadership guidance from Google Cloud experts, please see our CISO Insights hub.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
Cyber risk top 5: What every board should know: Boards should learn about security and digital transformation to better manage their organizations. Here’s five top risks they need to know. Read more.
Make IAM for GKE easier to use with Workload Identity Federation: Workload Identity Federation for GKE is now even easier to use with deeper IAM integration. Here’s what you need to know. Read more.
Shift-left your cloud compliance auditing with Audit Manager: Our Audit Manager service, which can help streamline the compliance auditing process, is now generally available. Read more.
Learn how to build a secure data platform: A new ebook, Building a Secure Data Platform with Google Cloud, details the tools available to protect your data as you use it to grow your business. Read more.
Bug hunting in Google Cloud’s VPC Service Controls: You can get rewarded for finding vulnerabilities in VPC Service Controls, which helps prevent data exfiltration. Here’s how. Read more.
Finding bugs in Chrome with CodeQL: Learn how to use CodeQL, a static analysis tool, to search for vulnerabilities in Chrome. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
Using AI to enhance red team engagements: Mandiant researchers look at several case studies that demonstrate how we can use AI to analyze data from complex adversarial emulation engagements to better defend organizations. Read more.
Empowering Gemini for malware analysis: In our latest advancements in malware analysis, we’re equipping Gemini with new capabilities to address obfuscation techniques and obtain real-time insights on indicators of compromise by integrating the Code Interpreter extension and the Google Threat Intelligence function calling. Read more.
Understanding the digital marketing ecosystem spreading pro-PRC influence operations: GLASSBRIDGE is an umbrella group of four different companies that operate networks of “fake” news sites and newswire services tracked by the Google Threat Intelligence Group. They publish thematically similar, inauthentic content that emphasizes narratives aligned to the political interests of the People’s Republic of China. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Google Cloud Security and Mandiant podcasts
Your top cloud IAM pet peeves (and how to fix them): Google Cloud’s Michele Chubirka, staff cloud security advocate, and Sita Lakshmi Sangameswaran, senior developer relations engineer, join host Anton Chuvakin for a deep dive into the state of Identity Access Management in the cloud, why you might be doing IAM wrong, and how to get it right. Listen here.
Behind the Binary: Motivation, community, and the future with YARA-X: Victor Manuel Alvarez, the creator of YARA, sits down with host Josh Stroschein to talk about how YARA became one of the most powerful tools in cybersecurity, and why we need a ground-up rewrite of this venerable tool. Listen here.
Behind the Binary: A look at the history of incident response, Mandiant, and Flare-On: Nick Harbour joins Josh to discuss his career journey from the Air Force to Mandiant, share insights into the evolution of malware analysis, and the development of the reverse engineering Flare-On contest. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in two weeks with more security-related updates from Google Cloud.
Application Load Balancer (ALB) now supports advertising the Certificate Authority (CA) subject names stored in its associated trust store, simplifying the certificate selection experience. When you enable this feature, the ALB sends a list of CA subject names to clients attempting to connect to the load balancer. Clients can use this list to identify which of their certificates the ALB will accept, reducing connection errors during mutual authentication.
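As a rough sketch, the feature would be enabled on a mutual-TLS listener. The `AdvertiseTrustStoreCaNames` key follows the ELBv2 ModifyListener mutual-authentication shape, but treat the attribute name as an assumption to verify against the current ELBv2 API reference:

```python
# Hypothetical sketch: turn on CA-subject-name advertisement for an
# mTLS-enabled listener. ARNs are placeholders; the attribute name
# AdvertiseTrustStoreCaNames is an assumption to verify.
modify_listener_params = {
    "ListenerArn": "arn:aws:elasticloadbalancing:region:acct:listener/app/my-alb/123/456",
    "MutualAuthentication": {
        "Mode": "verify",
        "TrustStoreArn": "arn:aws:elasticloadbalancing:region:acct:truststore/my-cas/789",
        "AdvertiseTrustStoreCaNames": "on",  # send CA subject names to clients
    },
}

# boto3.client("elbv2").modify_listener(**modify_listener_params)
```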
Today, AWS announces additional file support for AWS Application Discovery Service (ADS), which adds the ability to import VMware data generated by third-party data center tools. With today’s launch, you can take an export from Dell Technologies’ RVTools and load it directly into ADS without any file manipulation.
ADS provides a system of record for configuration, performance, tags, network connections, and application grouping of your existing on-premises workloads. With support for additional file formats, you now have the option to kick off your migration journey using the data you already have. At any later time, you can deploy either ADS Discovery Agents or ADS Agentless Collectors, and the data will automatically be merged into a unified view of your data center.
These new capabilities are available in all AWS Regions where AWS Application Discovery Service is available.
To learn more, please see the user guide for AWS Application Discovery Service. For more information on using the ADS import action via the AWS SDK or CLI, please see the API reference.
Amazon S3 Connector for PyTorch now supports Distributed Checkpoint (DCP), improving the time to write checkpoints to Amazon S3. DCP is a PyTorch feature for saving and loading machine learning (ML) models from multiple training processes in parallel. PyTorch is an open source ML framework used to build and train ML models.
Distributed training jobs often run for several hours or even days, and checkpoints are written frequently to improve fault tolerance. For example, jobs training large foundation models often run for several days and generate checkpoints that are hundreds of gigabytes in size. Using DCP with Amazon S3 Connector for PyTorch helps you reduce the time to write these large checkpoints to Amazon S3, keeping your compute resources utilized, ultimately resulting in lower compute cost.
Amazon S3 Connector for PyTorch is an open source project. To get started, visit the GitHub page.
Today, we are launching two new capabilities for EC2 Auto Scaling (ASG) that improve the responsiveness of Target Tracking scaling policies. Target Tracking now automatically adapts to the unique usage patterns of your individual applications, and it can be configured to monitor high-resolution CloudWatch metrics to make more timely scaling decisions. With this release, you can enhance your application performance while maintaining high utilization of your EC2 resources to save costs.
Scaling based on sub-minute CloudWatch metrics enables customers with applications that have volatile demand patterns, such as client-serving APIs, live streaming services, ecommerce websites, or on-demand data processing, to reduce the time to detect and respond to changing demand. In addition, Target Tracking policies now self-tune their responsiveness, using historical usage data to determine the optimal balance between cost and performance for each application, saving customers time and effort.
Both of these new features are available in select commercial Regions, and Target Tracking policies will begin self-tuning once they have finished analyzing your application’s usage patterns. You can use the AWS Management Console, CLI, SDKs, and CloudFormation to update your Target Tracking configurations. Refer to the EC2 Auto Scaling User Guide to learn more.
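To illustrate the sub-minute option, here is a sketch of a Target Tracking configuration driven by a high-resolution custom metric (10-second period). The metric namespace and name are hypothetical; the shape follows EC2 Auto Scaling's PutScalingPolicy with a metric-math customized metric specification:

```python
# Sketch: Target Tracking policy on a high-resolution (10-second) custom
# CloudWatch metric. Namespace and metric name are hypothetical.
target_tracking_config = {
    "TargetValue": 70.0,
    "CustomizedMetricSpecification": {
        "Metrics": [
            {
                "Id": "requests",
                "MetricStat": {
                    "Metric": {
                        "Namespace": "MyApp",               # hypothetical
                        "MetricName": "RequestsPerInstance",  # hypothetical
                    },
                    "Stat": "Average",
                    "Period": 10,  # sub-minute, high-resolution metric
                },
                "ReturnData": True,
            }
        ]
    },
}

# boto3.client("autoscaling").put_scaling_policy(
#     AutoScalingGroupName="my-asg",
#     PolicyName="high-res-target-tracking",
#     PolicyType="TargetTrackingScaling",
#     TargetTrackingConfiguration=target_tracking_config,
# )
```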
Amazon OpenSearch Ingestion now allows you to write data into Amazon Security Lake in real time, allowing you to ingest security data from both AWS and custom sources and uncover valuable insights into potential security issues in near real time. Amazon Security Lake centralizes security data from AWS environments, SaaS providers, and on-premises sources into a purpose-built data lake. With this integration, customers can seamlessly ingest and normalize security data from popular custom sources before writing it into Amazon Security Lake.
Amazon Security Lake uses the Open Cybersecurity Schema Framework (OCSF) to normalize and combine security data from a broad range of enterprise security data sources in the Apache Parquet format. With this feature, you can now use Amazon OpenSearch Ingestion to ingest and transform security data from popular third-party sources like Palo Alto Networks, CrowdStrike, and SentinelOne into OCSF format before writing the data into Security Lake. Once the data is written to Security Lake, it is available in the AWS Glue Data Catalog and AWS Lake Formation tables for the respective source.
This feature is available in all the 15 AWS commercial regions where Amazon OpenSearch Ingestion is currently available: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), South America (Sao Paulo), and Europe (Stockholm).
Today, AWS announced the general availability of a new console experience in AWS Resource Explorer that centralizes resource insights and properties from AWS services. With this release, you now have a single console experience to use simple keyword-based search for your AWS resources, view relevant resource properties, and confidently take action to organize your resources.
You can now inspect resource properties, resource-level cost with AWS Cost Explorer, AWS Security Hub findings, AWS Config compliance and configuration history, event timelines with AWS CloudTrail, and a relationship graph showing connected resources. You can also take actions on resources directly from the Resource Explorer console, such as manage tags, add resources to applications, and get additional information about a resource with Amazon Q. For example, now you can use Resource Explorer to search for untagged AWS Lambda functions, inspect the properties and tags of a specific function, examine a relationship graph to see what other resources it is connected to, and tag the function accordingly – all from a single console.
Resource Explorer is available at no additional charge, though features such as compliance information and configuration history require use of AWS Config, which is charged separately. These features are available in all AWS Regions where Resource Explorer is generally available. For more information on Resource Explorer, please visit our documentation. To learn more about how to configure Resource Explorer for your organization, view our multi-account search getting started guide.
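The untagged-Lambda example above can be sketched with the Resource Explorer Search API. The query string shown (a resource-type filter plus `tag:none`) follows Resource Explorer's documented query language, but verify the exact filter syntax for your use case:

```python
# Sketch: find untagged Lambda functions with Resource Explorer's Search API.
# The query syntax is an approximation of Resource Explorer's query language;
# confirm the filter names in the Resource Explorer documentation.
search_params = {
    "QueryString": "resourcetype:lambda:function tag:none",
    "MaxResults": 50,
}

# client = boto3.client("resource-explorer-2")
# for resource in client.search(**search_params)["Resources"]:
#     print(resource["Arn"])
```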
Amazon QuickSight now offers Highcharts visuals, enabling authors to create custom visualizations using the Highcharts Core library. This new feature extends your visualization capabilities beyond QuickSight’s standard chart offerings, allowing you to create bespoke charts such as sunburst charts, network graphs, 3D charts and many more.
Using declarative JSON syntax, authors can configure charts with greater flexibility and granular customization. You can reference QuickSight fields and themes in the JSON using QuickSight expressions. The integrated code editor includes contextual assistance, providing autocomplete and real-time validation to ensure proper configuration. To maintain security, the Highcharts visual editor prevents the injection of CSS and JavaScript. Refer to the documentation for the supported list of JSON and QuickSight expressions.
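As an illustration, a sunburst chart configuration follows the standard Highcharts options structure. The field-reference string below is a stand-in for QuickSight's expression syntax, which is defined in the QuickSight documentation:

```python
# Illustrative Highcharts options document for a sunburst visual, expressed
# as a Python dict (serialize with json.dumps for the editor). The "data"
# entry uses a hypothetical QuickSight expression placeholder; the real
# expression syntax is documented by QuickSight.
highcharts_options = {
    "chart": {"type": "sunburst"},
    "title": {"text": "Sales by Region"},
    "series": [
        {
            "type": "sunburst",
            # Placeholder for a QuickSight field expression (hypothetical):
            "data": ["<QuickSight expression referencing Region/Sales fields>"],
        }
    ],
}
```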
Highcharts visual is now available in all supported Amazon QuickSight regions – US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West). To learn more about the Highcharts visual and how to leverage its capabilities in your QuickSight dashboards, visit our documentation.
Amazon QuickSight introduces the ability to import visuals from an existing dashboard or analysis on which you have ownership privileges into your current analysis. This feature streamlines dashboard and report creation by allowing you to transfer associated dependencies such as datasets, parameters, calculated fields, filter definitions, and visual properties, including conditional formatting rules.
Authors can boost productivity by importing visuals instead of recreating them, facilitating collaboration across teams. The feature intelligently resolves conflicts, eliminates duplicates, rescopes filter definitions, and adjusts visuals to match the destination sheet type and theme. Imported visuals are forked from the source, ensuring independent customization. To learn more, click here.
The Import Visuals feature is available in all supported Amazon QuickSight regions – US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).
Amazon CloudFront announces enhancements to its standard access logging capabilities, providing customers with new log configuration and delivery options. Customers can now deliver CloudFront access logs directly to two new destinations: Amazon CloudWatch Logs and Amazon Data Firehose. Customers can select from an expanded list of log output formats, including JSON and Apache Parquet (for logs delivered to S3). Additionally, they can directly enable automatic partitioning of logs delivered to S3, select specific log fields, and set the order in which they are included in the logs.
Until today, customers had to write custom logic to partition logs, convert log formats, or deliver logs to CloudWatch Logs or Data Firehose. The new logging capabilities provide native log configurations, eliminating the need for custom log processing. For example, customers can now directly enable features like Apache Parquet format for CloudFront logs delivered to S3 to improve query performance when using services like Amazon Athena and AWS Glue.
Additionally, customers enabling access log delivery to CloudWatch Logs will receive 750 bytes of logs free for each CloudFront request. Standard access log delivery to Amazon S3 remains free. Please refer to the ‘Additional Features’ section of the CloudFront pricing page for more details.
Customers can now enable CloudFront standard logs to S3, CloudWatch Logs, and Data Firehose through the CloudFront console or APIs. CloudFormation support will be coming soon. For detailed information about the new access log features, please refer to the Amazon CloudFront Developer Guide.
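The field selection, ordering, and S3 partitioning described above can be sketched as a delivery configuration. The key names loosely follow the CloudWatch Logs vended-log delivery APIs (CreateDelivery); treat the specific keys and path variables as assumptions to verify against the CloudFront Developer Guide:

```python
# Sketch of a log delivery configuration: chosen record fields in a specific
# order, plus Hive-compatible partitioned paths in S3. Key names and the
# suffix-path variables are assumptions to verify before use.
delivery_config = {
    "recordFields": ["timestamp", "c-ip", "cs-method", "cs-uri-stem", "sc-status"],
    "s3DeliveryConfiguration": {
        "suffixPath": "{DistributionId}/{yyyy}/{MM}/{dd}",  # assumed variables
        "enableHiveCompatiblePath": True,  # automatic partitioning in S3
    },
}

# Used with the CloudWatch Logs delivery APIs, roughly:
#   boto3.client("logs").create_delivery(
#       deliverySourceName=..., deliveryDestinationArn=..., **delivery_config)
```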
Amazon QuickSight launches Layer Map, a new geospatial visual with shape layer support. With Layer Maps you can visualize data using custom geographic boundaries, such as congressional districts, sales territories, or user-defined regions. For example, sales managers can visualize sales performance by custom sales territories, and operations analysts can map package delivery volumes across different zip code formats (zip 2, zip 3).
Authors can add a shape layer over a base map by uploading a GeoJSON file and joining it with their data to visualize values. You can also style the shape layer by adjusting color, border, and opacity, and add interactivity through tooltips and actions. To learn more, click here.
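A shape layer file is a standard GeoJSON FeatureCollection: one polygon per custom territory, with a property used as the join key against your dataset. The property name here (`territory_id`) is hypothetical:

```python
# Minimal GeoJSON FeatureCollection for a custom sales-territory shape layer.
# The "territory_id" property (a hypothetical name) is the join key you would
# match against a column in your QuickSight dataset.
sales_territories = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {"territory_id": "WEST-01"},  # join key
            "geometry": {
                "type": "Polygon",
                "coordinates": [[
                    [-122.5, 37.7], [-122.5, 38.0],
                    [-122.0, 38.0], [-122.0, 37.7],
                    [-122.5, 37.7],  # closed ring: first point repeated last
                ]],
            },
        }
    ],
}
```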
Layer map is now available in following Amazon QuickSight regions – US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo).
Amazon QuickSight now includes an Image Component, giving authors greater flexibility to incorporate static images into their QuickSight dashboards, analyses, reports, and stories.
With the Image component, authors can upload images directly from their local desktop to QuickSight for a variety of use cases, such as adding company logos and branding, including background images with free-form layouts, and creating captivating story covers. It also supports tooltips and alt text, providing additional context and accessibility for readers. Furthermore, it offers navigation and URL actions, enabling authors to make their images interactive, such as triggering specific dashboard actions when the image is clicked. For more details, refer to the documentation.
Image component is now available in all supported Amazon QuickSight regions – US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).
AWS Lambda announces Provisioned Mode for event source mappings (ESMs) that subscribe to Apache Kafka event sources, a feature that allows you to optimize the throughput of your Kafka ESM by provisioning event polling resources that remain ready to handle sudden spikes in traffic. Provisioned Mode helps you build highly responsive and scalable event-driven Kafka applications with stringent performance requirements.
Customers building streaming data applications often use Kafka as an event source for Lambda functions, using Lambda’s fully-managed MSK ESM or self-managed Kafka ESM, which automatically scale polling resources in response to events. However, for event-driven Kafka applications that need to handle unpredictable bursts of traffic, lack of control over the throughput of the ESM can lead to delays in your users’ experience. Provisioned Mode for Kafka ESM allows you to fine-tune the throughput of the ESM by provisioning and auto-scaling between a minimum and maximum number of polling resources called event pollers, and it is ideal for real-time applications with stringent performance requirements.
This feature is generally available in all AWS Commercial Regions where AWS Lambda is available, except Israel (Tel Aviv), Asia Pacific (Malaysia), and Canada West (Calgary).
You can activate Provisioned Mode for the MSK ESM or self-managed Kafka ESM by configuring a minimum and maximum number of event pollers through the ESM API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, or AWS SAM. You pay for the usage of event pollers, billed in a unit called an Event Poller Unit (EPU). To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
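As a sketch, enabling Provisioned Mode on an existing event source mapping amounts to setting a provisioned poller configuration with minimum and maximum poller counts. The `ProvisionedPollerConfig` key follows the UpdateEventSourceMapping API shape; the UUID and counts below are placeholders:

```python
# Sketch: enable Provisioned Mode on an existing Kafka event source mapping.
# The UUID is a placeholder for your ESM's identifier; poller counts are
# illustrative values you would tune for your traffic.
update_params = {
    "UUID": "00000000-0000-0000-0000-000000000000",  # your ESM's UUID
    "ProvisionedPollerConfig": {
        "MinimumPollers": 2,    # always-ready baseline polling capacity
        "MaximumPollers": 10,   # upper bound for auto-scaling under bursts
    },
}

# boto3.client("lambda").update_event_source_mapping(**update_params)
```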