AWS – AWS Clean Rooms launches Spark SQL support with configurable compute size
Today, AWS announces the launch of AWS Clean Rooms Spark SQL, offering customers the ability to run custom queries using Spark SQL. With this launch, customers can create an AWS Clean Rooms collaboration using the Spark analytics engine, and support workloads of different sizes with configurable instance types at query runtime.
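As a rough illustration of what this looks like in practice, the sketch below uses boto3 (not shown in the announcement; all names and account IDs are placeholders) to create a collaboration that opts into the Spark analytics engine via the CreateCollaboration API's analyticsEngine setting:

```python
# Minimal sketch, assuming boto3 with configured credentials; the names,
# account ID, and abilities below are illustrative placeholders.
import boto3

cleanrooms = boto3.client("cleanrooms")

response = cleanrooms.create_collaboration(
    name="ad-campaign-measurement",             # placeholder name
    description="Collaboration using the Spark analytics engine",
    creatorDisplayName="AdvertiserCo",          # placeholder
    creatorMemberAbilities=["CAN_QUERY", "CAN_RECEIVE_RESULTS"],
    members=[
        {
            "accountId": "111122223333",        # placeholder partner account
            "memberAbilities": [],
            "displayName": "PublisherCo",       # placeholder
        }
    ],
    queryLogStatus="ENABLED",
    analyticsEngine="SPARK",  # select Spark instead of the default engine
)
print(response["collaboration"]["id"])
```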
With AWS Clean Rooms Spark SQL, you can query large datasets with the commonly used Spark SQL dialect. AWS Clean Rooms Spark SQL provides enhanced flexibility to customize and allocate resources to run SQL queries based on your performance, scale, and cost requirements. For example, customers can use large instance configurations to satisfy the performance needed for their complex datasets and queries, or smaller instances to optimize costs. This is the latest addition to the multiple analysis capabilities of AWS Clean Rooms, including SQL aggregation, list, and custom analysis rules, Clean Rooms ML, Differential Privacy, and the no-code analysis builder.
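To make the runtime sizing concrete, here is a hedged sketch of starting a protected Spark SQL query and passing a compute configuration at query time. The membership ID, query, bucket, worker type, and worker count are placeholder assumptions, and computeConfiguration is assumed to be the parameter this launch uses for sizing:

```python
# Minimal sketch, assuming boto3; membership ID, bucket, and worker values
# are placeholders chosen for illustration, not recommendations.
import boto3

cleanrooms = boto3.client("cleanrooms")

response = cleanrooms.start_protected_query(
    type="SQL",
    membershipIdentifier="membership-id-placeholder",
    sqlParameters={
        "queryString": (
            "SELECT segment, COUNT(*) AS matches "
            "FROM matched_audience GROUP BY segment"
        )
    },
    resultConfiguration={
        "outputConfiguration": {
            "s3": {
                "resultFormat": "PARQUET",
                "bucket": "amzn-s3-demo-bucket",   # placeholder bucket
                "keyPrefix": "clean-rooms-results/",
            }
        }
    },
    # Choose the instance size at query runtime: larger or more workers
    # for complex queries, fewer or smaller workers to optimize cost.
    computeConfiguration={"worker": {"type": "CR.1X", "number": 16}},
)
print(response["protectedQuery"]["id"])
```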
AWS Clean Rooms Spark SQL is generally available in select AWS Regions, and only available for the custom analysis rule. AWS Clean Rooms helps companies and their partners more easily analyze and collaborate on their collective datasets without revealing or copying one another's underlying data. Companies can deploy their own clean rooms without having to build, manage, or maintain their own solutions and without moving data outside of their AWS environment. To learn more, visit AWS Clean Rooms.