

Apex Cursors for Scalable Data Processing in Salesforce
Working with large volumes of data in Salesforce is tricky. The platform enforces governor limits (rules that stop any one transaction from consuming too many resources), so we have to be deliberate about how we process big datasets.
For years, we’ve relied on Batch Apex to process records in chunks (called batches). It works well, but it can be complex to manage and isn’t very flexible.
Now Salesforce has introduced Apex Cursors (Beta), a simpler and more scalable way to process large amounts of data.
What Are Apex Cursors?
Apex Cursors allow you to break SOQL query results into manageable chunks that can be processed within a single transaction. Unlike Batch Apex, cursors are stateless and provide fine-grained control over data traversal. Key features include:
- Incremental Data Fetching: Retrieve records in configurable chunks (e.g., 200 records at a time).
- Bidirectional Navigation: Move forward or backward within the result set (see the sketch after this list).
- Integration with Queueable Jobs: Combine with chained Queueable Apex for asynchronous, scalable processing.
- High-Volume Support: Handle up to 50 million rows per cursor.
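As a quick illustration of incremental fetching and bidirectional navigation, here is a minimal sketch (the object, query, and chunk size are illustrative, and the snippet assumes an Anonymous Apex context):
// Forward: fetch the first chunk (rows 0-199), then the next one (rows 200-399)
Database.Cursor cur = Database.getCursor('SELECT Id, Name FROM Account');
Integer chunkSize = 200;
List<Account> firstChunk = cur.fetch(0, chunkSize);
List<Account> secondChunk = cur.fetch(chunkSize, chunkSize);

// Backward: re-fetch the first chunk simply by passing a smaller position
List<Account> backAgain = cur.fetch(0, chunkSize);

System.debug('Total rows behind the cursor: ' + cur.getNumRecords());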
Key Benefits Over Batch Apex
- Stateless Architecture: No need to manage state variables, simplifying code logic.
- Flexible Chunk Sizes: Dynamically adjust the number of records fetched per transaction.
- Reduced Overhead: Avoid Batch Apex’s start/execute/finish lifecycle complexity.
- Error Resilience: Retry transient failures (TransientCursorException) without restarting the entire process.
How Apex Cursors Work:
- Cursor Initialization: Create a cursor using Database.getCursor(query) or Database.getCursorWithBinds() for dynamic SOQL (a bind-variable sketch follows this list).
Database.Cursor locator = Database.getCursor('SELECT Id FROM Contact WHERE LastActivityDate = LAST_N_DAYS:400');
- Fetching Records
Use Cursor.fetch(position, count) to retrieve records starting at a specific offset:
List<Contact> scope = locator.fetch(position, 200); // Fetch 200 records starting at "position"
- Tracking Progress
Maintain a position variable to track the offset for subsequent fetches.
- Termination Condition
Use Cursor.getNumRecords() to determine when all records have been processed.
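Putting these steps together, here is a sketch of a dynamic-SOQL cursor driven by a position counter until getNumRecords() is reached. The bind-map call is an assumption modeled on Database.queryWithBinds(), and the 400-day filter and chunk size are illustrative; check the beta documentation for the exact getCursorWithBinds() signature.
// Dynamic SOQL with bind variables (signature assumed to mirror Database.queryWithBinds)
Map<String, Object> binds = new Map<String, Object>{ 'cutoff' => Date.today().addDays(-400) };
Database.Cursor cur = Database.getCursorWithBinds(
    'SELECT Id FROM Contact WHERE LastActivityDate >= :cutoff',
    binds,
    AccessLevel.USER_MODE
);

Integer position = 0;
while (position < cur.getNumRecords()) {             // termination condition
    List<Contact> chunk = cur.fetch(position, 200);  // fetch the next chunk
    position += chunk.size();                        // track progress
    // Process the chunk here; in real code, move this loop into async jobs
    // to respect the per-transaction fetch-call limit (see Best Practices below).
}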
Queueable Apex with Cursors
Below is an annotated example demonstrating Apex Cursors in action:
public class QueryChunkingQueueable implements Queueable {
    private Database.Cursor locator;
    private Integer position; // Tracks the current processing offset

    // Constructor initializes the cursor and resets the position
    public QueryChunkingQueueable() {
        // Step 1: Initialize the cursor with a SOQL query
        locator = Database.getCursor('SELECT Id FROM Contact WHERE LastActivityDate = LAST_N_DAYS:400');
        position = 0; // Start at the beginning of the result set
    }

    // Execute method processes one chunk and enqueues the next one
    public void execute(QueueableContext ctx) {
        // Step 2: Fetch the next chunk of records (200 at a time)
        List<Contact> scope = locator.fetch(position, 200);
        position += scope.size(); // Update the offset

        // Step 3: Process records (e.g., archive, delete, or transform data)
        // ... Add your business logic here ...

        // Step 4: Enqueue the next chunk if records remain
        if (position < locator.getNumRecords()) {
            // Re-enqueue this job; its locator and position fields carry over to the next run
            System.enqueueJob(this);
        }
    }
}
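To start the chain, enqueue a single instance of the job (for example, from Anonymous Apex or a scheduled entry point):
// One enqueue kicks off the whole chain; each run re-enqueues itself until done
Id jobId = System.enqueueJob(new QueryChunkingQueueable());
System.debug('Started cursor-driven chunking job: ' + jobId);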
Key Code Explanations:
Cursor Initialization
- The cursor is created in the constructor to ensure it’s available throughout the job’s lifecycle.
- The query SELECT Id FROM Contact WHERE LastActivityDate = LAST_N_DAYS:400 targets records with activity in the last 400 days.
Dynamic Offset Management
- position starts at 0 and increments by the size of each fetched batch.
- This allows the next fetch() call to retrieve the subsequent chunk.
Chaining Queueable Jobs
- By re-enqueueing the same job (System.enqueueJob(this)), the process continues until all records are processed.
- This pattern avoids hitting transaction limits (e.g., heap size, CPU time).
Best Practices for Using Apex Cursors
- Opt for Small Chunks: Fetch 100–200 records per transaction to stay within governor limits.
- Handle Exceptions Gracefully (see the sketch after this list):
- Retry on TransientCursorException (e.g., network issues).
- Log and halt on FatalCursorException (e.g., invalid query).
- Monitor Limits:
- Use Limits.getApexCursorRows() to track daily usage against the 100 million row aggregate limit.
- Stay under 10 fetch calls per transaction (Limits.getFetchCallsOnApexCursor()).
- Combine with Asynchronous Tools: Queueable jobs, future methods, or Platform Events can orchestrate complex workflows.
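As a rough sketch of the retry and monitoring guidance above, the execute() method from the earlier example could be hardened like this (the retries counter, MAX_RETRIES cap, and logging are illustrative assumptions, not part of the cursor API):
// Assumes the class also declares two illustrative fields:
//   private Integer retries = 0;
//   private static final Integer MAX_RETRIES = 3;
public void execute(QueueableContext ctx) {
    System.debug('Cursor rows used today: ' + Limits.getApexCursorRows());
    try {
        List<Contact> scope = locator.fetch(position, 200);
        position += scope.size();
        // ... business logic ...
        if (position < locator.getNumRecords()) {
            System.enqueueJob(this); // continue the chain
        }
    } catch (TransientCursorException e) {
        // Transient failure: retry the same chunk by re-enqueueing without advancing the offset
        if (retries < MAX_RETRIES) {
            retries++;
            System.enqueueJob(this);
        }
    } catch (FatalCursorException e) {
        // Fatal failure (e.g., invalid query): log and stop the chain
        System.debug(LoggingLevel.ERROR, 'Cursor job aborted: ' + e.getMessage());
    }
}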
When to Use Apex Cursors vs. Batch Apex
Apex Cursors (Beta):
- Stateless processing where tracking offsets manually is manageable.
- High-volume datasets (up to 50 million records) requiring incremental processing.
- Dynamic batch sizes (e.g., adjust chunk sizes per transaction; see the sketch at the end of this section).
- Bidirectional navigation through query results (forward/backward).
- Chained async jobs (e.g., Queueable Apex) for scalable, non-linear workflows.
- Transient error recovery (retry failed chunks without restarting the entire job).
- Flexible governor limit management (e.g., avoid hitting heap/CPU limits with smaller chunks).
Batch Apex:
- Stateful operations requiring lifecycle methods (start, execute, finish).
- Smaller datasets with predictable processing patterns.
- Fixed batch sizes (200 records by default, customizable up to 2,000).
- Production-grade stability (GA feature, unlike beta Apex Cursors).
- Built-in retry mechanism for failed batches (via Salesforce retries).
- Simpler use cases where manual offset tracking is unnecessary.
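For the dynamic-batch-sizes point in the Apex Cursors list above, one hypothetical pattern is to pass the chunk size into the Queueable and tune it between chained runs; the halving rule below is purely illustrative:
public class TunableChunkingQueueable implements Queueable {
    private Database.Cursor locator;
    private Integer position;
    private Integer chunkSize;

    public TunableChunkingQueueable(Database.Cursor locator, Integer position, Integer chunkSize) {
        this.locator = locator;
        this.position = position;
        this.chunkSize = chunkSize;
    }

    public void execute(QueueableContext ctx) {
        List<SObject> scope = locator.fetch(position, chunkSize);
        position += scope.size();
        // ... business logic ...

        if (position < locator.getNumRecords()) {
            // Illustrative tuning rule: halve the chunk if this run used over half its heap
            Integer nextSize = (Limits.getHeapSize() > Limits.getLimitHeapSize() / 2)
                ? Math.max(100, chunkSize / 2)
                : chunkSize;
            System.enqueueJob(new TunableChunkingQueueable(locator, position, nextSize));
        }
    }
}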
Conclusion
Apex Cursors are a powerful new tool for working with big data in Salesforce. They offer more control, better performance, and easier error handling compared to older tools like Batch Apex.
Although the feature is still in beta, it’s worth exploring, especially if you deal with millions of records. Just remember to follow the best practices above and monitor your limits. Happy Coding 🙂
Do you think that with Queueable Apex, if the batch size is small and there is a huge number of records to process, we could hit the maximum queue chaining limit?