GCP – Introducing request priorities for Cloud Spanner APIs
Today we’re happy to announce that you can now specify request priorities for some Cloud Spanner APIs. By assigning a HIGH, MEDIUM, or LOW priority to a specific request, you can convey the relative importance of your workloads and better align resource usage with performance objectives. Internally, Cloud Spanner uses priorities to decide which workloads to schedule first when many tasks contend for limited resources.
You can take advantage of this feature if you are running mixed workloads on your Cloud Spanner instances. For example, you may want to run an analytical workload while processing DML statements, and be okay with the analytical workload taking longer to run. In that case, you’d run your analytical queries at LOW priority, signaling to Spanner that it can reorder more urgent work ahead of them if it needs to make tradeoffs.
When ample resources are available, all requests are served promptly regardless of priority. Given two otherwise identical requests, one at HIGH priority and the other at LOW priority, there will be no noticeable difference in latency between the two when there is no resource contention. As a distributed system, Spanner is designed to run multiple tasks in parallel, regardless of their priority. However, when there aren’t enough resources to go around, such as during a sudden burst of traffic or a large batch process, the scheduler will try to run high-priority tasks first, so lower-priority tasks may take longer than they would in an otherwise identical system that wasn’t resource constrained. It is important to note that priorities are a hint to the scheduler rather than a guarantee: there are situations where a lower-priority request will be served ahead of a higher-priority request, for example, when the lower-priority request is holding a transaction lock that the higher-priority request needs.
Using Request Priorities
The Priority parameter is part of a new optional RequestOptions parameter that you can specify in the following APIs:
- Read
- StreamingRead
- ExecuteSql
- ExecuteStreamingSql
- Commit
- ExecuteBatchDml
You can access this newly added parameter if you are directly issuing requests to our RPC API or REST API, or via the Java or Go client libraries, with the rest of the client libraries implementing support for this parameter soon.
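If you are building requests at the RPC level, the priority lives in the RequestOptions message attached to each of the requests listed above. The snippet below is only a rough sketch, assuming the generated google.spanner.v1 protobuf classes for Java; the session name and SQL are placeholders.

import com.google.spanner.v1.ExecuteSqlRequest;
import com.google.spanner.v1.RequestOptions;

// Sketch: build an ExecuteSql request that carries a LOW priority in its
// RequestOptions (proto field and enum names assumed from google.spanner.v1).
ExecuteSqlRequest request =
    ExecuteSqlRequest.newBuilder()
        .setSession("projects/my-project/instances/my-instance/databases/my-db/sessions/session-id")
        .setSql("SELECT * FROM Albums")
        .setRequestOptions(
            RequestOptions.newBuilder()
                .setPriority(RequestOptions.Priority.PRIORITY_LOW))
        .build();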
The following sample code demonstrates how to specify the priority of a query using the Java client library:
// Run this query at LOW priority; Spanner may deprioritize it under resource contention.
QueryOption queryOption = Options.priority(RpcPriority.LOW);
ResultSet resultSet = dbClient.singleUse()
    .executeQuery(Statement.of("SELECT * FROM TABLE"), queryOption);
Note: Even though you can specify a priority for each request, it is recommended that requests that are part of the same transaction all have the same priority.
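For example, with the Java client library you could keep an entire read-write transaction, including its commit, at a single priority. This is only a sketch, assuming Options.priority() is also accepted as a transaction and update option:

import com.google.cloud.spanner.DatabaseClient;
import com.google.cloud.spanner.Options;
import com.google.cloud.spanner.Options.RpcPriority;
import com.google.cloud.spanner.Statement;

// Run the DML statement and the transaction's commit at the same LOW priority.
void runLowPriorityUpdate(DatabaseClient dbClient) {
  dbClient
      .readWriteTransaction(Options.priority(RpcPriority.LOW))
      .run(txn ->
          txn.executeUpdate(
              Statement.of("UPDATE Albums SET MarketingBudget = 0 WHERE SingerId = 1"),
              Options.priority(RpcPriority.LOW)));
}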
Monitoring
The Cloud Console reflects these new priorities in the CPU utilization metrics, grouping them into HIGH and LOW/MEDIUM buckets.
In the screenshot above, at 5:08 a low-priority workload was running with no other competing workloads, and it was allocated 100% of the available CPU. However, when a high-priority workload started at ~5:09, the high-priority workload was served immediately and the low-priority workload's CPU utilization dropped to 60%. When the high-priority workload completed, the low-priority workload resumed using 100% of the available CPU.
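If you want to track these buckets outside of the console, the same per-priority CPU data should be available through the Cloud Monitoring API. The following is a sketch only: the metric type spanner.googleapis.com/instance/cpu/utilization_by_priority and the labels used in the filter are assumptions based on the console grouping, so check Metrics Explorer for the exact names your project exposes.

import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.protobuf.util.Timestamps;
import java.io.IOException;

// List the last hour of per-priority Spanner CPU utilization for one instance.
// Metric type and label names below are assumptions, not confirmed by this post.
void listCpuUtilizationByPriority(String projectId, String instanceId) throws IOException {
  try (MetricServiceClient client = MetricServiceClient.create()) {
    long now = System.currentTimeMillis();
    ListTimeSeriesRequest request =
        ListTimeSeriesRequest.newBuilder()
            .setName(ProjectName.of(projectId).toString())
            .setFilter(
                "metric.type=\"spanner.googleapis.com/instance/cpu/utilization_by_priority\""
                    + " AND resource.labels.instance_id=\"" + instanceId + "\"")
            .setInterval(
                TimeInterval.newBuilder()
                    .setStartTime(Timestamps.fromMillis(now - 3_600_000L))
                    .setEndTime(Timestamps.fromMillis(now)))
            .build();
    for (TimeSeries series : client.listTimeSeries(request).iterateAll()) {
      System.out.println(series.getMetric().getLabelsMap() + " -> " + series.getPointsCount() + " points");
    }
  }
}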
Access this newly added parameter by issuing requests to our RPC API or REST API, or via the Java or Go client libraries. Read more in the documentation for all the details.