Governor-friendly asynchronous processing from triggers

This article shows how to perform governor-friendly asynchronous processing from triggers without compromising bulk processing, even when the initiating change is itself asynchronous. When records are created, edited or deleted, you sometimes need to process the record data to make something else happen. This need arises for all sorts of reasons, with all sorts of results and in various contexts. Some examples:

  • When a Contact is created, an email is sent to that contact’s email address to validate that the Contact owns or has access to the given email address.
  • When an Opportunity is assigned a new Amount that takes the total for the Account over a certain threshold, generate an outgoing message to an external system.
  • When a new Account is created, its Billing Address is used to determine the Sales team to which it is initially assigned, using a complex algorithm to perform that mapping.
  • When a Location is created, a pool of Contacts is created listing those within a certain driving time from the Location.

There are several different points to consider when deciding how to implement this processing, such as:

  1. Can this be achieved using entirely standard mechanisms, such as rollup summary fields or validation rules?
  2. Can this be achieved with “low code”, e.g. using a record triggered flow?
  3. Is a callout necessary?
  4. Is the processing too complex or compute intensive for “low code”, therefore needing “pro code”, i.e. implementation via an apex trigger?
  5. Does the processing need to result in changes to the current record, related record(s) or some other data in the database?
  6. Must the processing be performed immediately and synchronously with the initiating change?

Sitting behind all these questions is the fundamental requirement:

Whatever this processing is, it must be done in a way that fits within Salesforce’s governor limits and technical restrictions

This article focuses on the scenario where the processing needs are complex, either computationally expensive or requiring callouts, and able to be performed shortly after the originating update rather than synchronously with that update. That means it should use some form of out-of-transaction processing initiated from an apex trigger. It is, however, important to ensure that this processing is only performed if the initial transaction succeeds.

Salesforce’s flawed solution for callouts from triggers

Callouts cannot be made from triggers; this is one of those technical restrictions applied by the Salesforce platform.

Salesforce provides a documented solution for engineering your trigger to invoke a callout. The concept is simple: push the initiation of the callout into a separate transaction by encapsulating it in a future method.
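
To illustrate, here is a minimal sketch of that documented pattern; the trigger, class and endpoint names are illustrative, not Salesforce’s:

trigger ContactEmailValidation on Contact (after insert) {
    // Hand the callout off to a separate, asynchronous transaction.
    ContactEmailValidator.validateEmails(Trigger.newMap.keySet());
}

public class ContactEmailValidator {
    @future(callout=true)
    public static void validateEmails(Set<Id> contactIds) {
        // This method runs in its own transaction, where callouts are permitted.
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/validate'); // illustrative endpoint
        req.setMethod('POST');
        req.setBody(JSON.serialize(new List<Id>(contactIds)));
        HttpResponse res = new Http().send(req);
        // Handle the response as needed.
    }
}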

Let’s consider the platform limits that break this solution (more detail on the specific limits can be found in reference [1]):

  • Triggers are called with up to 200 records at a time:
    • A transaction can include bulk DML on up to 10000 records, so your trigger could be called 50 times (50 * 200 = 10000).
  • A synchronous transaction can initiate 50 future methods:
    • That seems to fit, right? Your trigger should only be called 50 times if you insert/update 10000 records after all.
    • What if you actually had more than 50 separate, smaller DML operations against your object in your transaction? You are allowed up to 150 of these in a transaction, after all.
  • An asynchronous transaction can initiate 0 future methods:
    • If your trigger is called because of a DML operation that itself is initiated from within a transaction that is already asynchronous (i.e. from within a future method, a Queueable, a Batchable or a Schedulable) then you can’t use future methods at all.

This only works in very limited scenarios. It may work when you first implement your solution, but solutions change over time and you never know when someone may introduce DML operations against your records in an asynchronous context. What do you do then? Clearly you need to fix the problem, and an obvious solution is simply not to call the future method at all. But if you do that, the callout won’t happen.

Going out-of-transaction: available options

As we saw in the previous section, if you want to do a callout from a trigger, you need to take that callout out of your current transaction and put it in a different one. This also works for other use cases, such as performing CPU-intensive processing or otherwise pushing up against Salesforce transactional limits (such as the number of queries or the number of queried rows).

By going “out-of-transaction”:

  • The initiating change is allowed to complete without waiting for this processing.
  • The out-of-transaction processing is given its own, separate governor limits. If performed in an asynchronous context, many of those limits are higher, such as the CPU time and heap size limits.

Note that I have said “out-of-transaction” instead of “asynchronous” processing.

By “out-of-transaction” I mean that the processing happens in a separate transaction against the Salesforce platform. There are two distinct options available, as part of the Salesforce platform, for initiating a new transaction from a trigger within an existing server-side transaction. You can:

  1. Initiate some asynchronous process using a Future method, a Queueable, a Batchable or a Schedulable.
  2. Publish a “Publish After Commit” style Platform Event and include a trigger-based event subscriber.

Be aware that all these options require the initiating transaction to complete successfully for them to subsequently execute (unlike the “Publish Immediately” style Platform Event).
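
For concreteness, the two initiation mechanisms look like this in apex; MyProcessingJob and X_Processing_Requested__e are hypothetical names, and the event is assumed to be defined with the “Publish After Commit” behavior:

public class MyProcessingJob implements Queueable {
    public void execute(QueueableContext context) {
        // The out-of-transaction work goes here.
    }
}

// Option 1: initiate asynchronous processing, here via a Queueable.
System.enqueueJob(new MyProcessingJob());

// Option 2: publish a Platform Event; its trigger-based subscriber runs in a
// separate transaction once the current transaction commits successfully.
EventBus.publish(new X_Processing_Requested__e());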

These options have upsides and downsides, which are explored next.

Asynchronous pros and cons

All the asynchronous processing options (Futures, Queueables, Batchables and, at a pinch, Schedulables) that can be used from triggers really do run in an asynchronous context, and so benefit from increased governor limits [1].

They consume async executions, which count towards a limit that is itself based on the number of user licenses you have, with a floor of 250,000 per 24-hour period [1]. This sounds like a lot, but if your org has a lot of data and a lot of processing you really can run out.


The number of asynchronous processes that can be queued or scheduled in each initiating transaction is quite limited, even more so when the context is itself already asynchronous. In an async scenario you can only enqueue a single queueable instance, for example. If you’re already in a future method, you cannot invoke another future method.

Combine these restrictions with the fact that an apex trigger is invoked for chunks of at most 200 records in each DML operation. If your object is mass-updated from within some async processing, or there are a large number of separate DML operations against your object in that one transaction, then your trigger cannot enqueue a queueable to address the “out-of-transaction” processing need. You also cannot use a future method, since your trigger may itself be invoked from a future method, as covered in an earlier section.

Even in the synchronous scenario you are limited to enqueuing at most 50 Queueables per transaction, and you could run out of these in some bulk or fragmented update scenarios (especially if triggers get recursively invoked).

All DML operations performed by this automation are attributed to the user running the initiating transaction.

Platform Event pros and cons

Trigger-based platform event subscribers run in a different transaction, effectively asynchronous compared with the initiating transaction, but interestingly count as synchronous invocations:

  • They do not consume asynchronous executions.
  • They are allocated synchronous processing limits during event processing.

A distinct issue with Platform Events is that they do not have guaranteed delivery. While it should be rare, you should be prepared for these events to go AWOL [2].

The one thing that they do consume, unlike asynchronous apex, is event notifications published per hour [3]. The allocation is 250,000 per hour, so many more than you get in terms of async executions per 24 hours.

The number of (publish after commit) Platform Events that you can publish in each transaction isn’t directly limited since they are being consumed by apex. However, with an appropriate implementation approach, you only actually need to publish a single Platform Event; the event doesn’t need to contain any specific state (a benefit when you remember that these can get lost) when the records in the database hold that state instead. All the event need do is kick off the required processing.

Note, too, that if multiple transactions happen to publish Platform Events concurrently, these get grouped together and are sent to the subscriber in one go (or at least in chunks of 2000). The processor can simply discard “duplicate” events and do the processing required against just one of those events.

A given subscriber for a Platform Event is called with all events in the order in which they were published. Further, that subscriber is called in a single-threaded manner; there will never be two versions of the subscriber executing at the same time. This is ideal for scenarios where you want to guarantee that the extended processing for a given record is done once and only once, without fear of race conditions or multiple executions of the processing.

Note that this single-threading, and the slight delay between subscriber invocations by the platform, does mean there is a limit on throughput. For example, if this mechanism is used purely to handle callouts: a synchronous transaction can perform a maximum of 100 callouts, and with a 1-second delay between invocations there can be at most 86,400 invocations per day, giving a maximum of 100 * 86,400 = 8,640,000 callouts per day.

All DML operations performed by this automation, assuming the platform event subscriber is implemented as an apex trigger, are attributed to the Automated Process user, or an explicitly specified user, rather than the user running the initiating transaction. If the subscriber is implemented as a flow, then the updates are attributed to the user initiating the transaction.

The Platform Event-based solution

Given the points made in the previous section, the most resilient approach that I’ve found for doing heavy lifting or making callouts in reaction to DML operations is using some appropriate flagging of records and a stateless Platform Event.

The design pattern applied boils down to:

  • Records get marked, by their trigger, as needing to be processed.
  • A minimum number of “publish after commit” Platform Events are published from that trigger, at most one per apex transaction (not trigger execution). This helps keep within the per-hour publication limits.
  • The Platform Events contain no uniquely valuable information, to avoid having problems if events do fail to be delivered successfully.
  • A Platform Event apex trigger subscriber processes as many marked records as it can in one go, clearing the mark on each record in a non-contentious way and chaining on to process more records while some remain unprocessed in the database.
  • Because a Platform Event subscriber receives events in the order they were published, and isn’t called again with more events until it has finished processing the current set, the processor does not need to worry about concurrent processing of a given record.

The approach detail is covered in the subsections below.

Tracking records to process

The Platform Events need to be stateless, so we can afford for them to get lost. That means we need instead to mark each record that needs to be processed so the Platform Event subscriber can find them again later.

Using a simple Checkbox field on each record could lead to contention and a failure to properly process a record when those records are being marked and processed/unmarked concurrently.

To be resilient to rapid update (ensuring that there’s no race condition or field update contention) it is best to ensure that the record is marked as needing processing in the trigger and marked as processed in the separated processor using different fields.

These are timestamp fields to allow them to be easily compared and to identify when one action follows the other. The first field is set when there is a need for processing and the second is set when processing has been completed.

It’s not possible to directly compare fields’ values in a SOQL query, but this is easily resolved by comparing the two fields in a third, formula, field. It is this formula field that is used in the SOQL queries used to find records to process.

The triplet of fields follows this pattern, where X is a “placeholder” for the name of the type of processing:

  • XProcessingLastRequired__c: Datetime
  • XProcessingLastPerformed__c: Datetime
  • XProcessingRequired__c: Checkbox formula, defined as NOT(ISBLANK(XProcessingLastRequired__c)) && (ISBLANK(XProcessingLastPerformed__c) || XProcessingLastPerformed__c <= XProcessingLastRequired__c)

Importantly the XProcessingRequired__c checkbox is only true if the record has been marked for processing but has not yet been processed. This is the value that is used by the platform event processing to select the records that need to be processed.

Record trigger handler responsibilities

When a given record needs processing, the record’s trigger handler sets XProcessingLastRequired__c from a transaction-scoped static variable, itself statically initialized to Datetime.now(). Using a single static value means every record marked in the same transaction gets the same timestamp.

It also needs to publish a Platform Event. However, it only needs to publish at most one for the current transaction (regardless of the number of times the trigger, and therefore the trigger handler, is called).

This can be addressed by having a simple transactional static Boolean variable, initialized to false and set to true when a platform event is published. If the variable is true then no further platform events need be published. This flag is read in, and set by, the record’s trigger handler only.
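
Pulling these responsibilities together, a minimal sketch of the record trigger handler might look as follows, assuming a before insert/update trigger on Account and the hypothetical X_Processing_Requested__e event from earlier (all names here are illustrative):

public class AccountXProcessingMarker {
    // One timestamp for the whole transaction, captured at class load.
    private static final Datetime REQUIRED_AT = Datetime.now();
    // Tracks whether an event has already been published in this transaction.
    private static Boolean eventPublished = false;

    // Called from the Account trigger's before insert/update handling for
    // records that need processing; a before trigger needs no extra DML to
    // set the field.
    public static void markForProcessing(List<Account> records) {
        for (Account record : records) {
            record.XProcessingLastRequired__c = REQUIRED_AT;
        }
        if (!eventPublished) {
            EventBus.publish(new X_Processing_Requested__e());
            eventPublished = true;
        }
    }
}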

Platform Event trigger handler responsibilities

When the events are received, all those representing the need to run the processing can be collapsed down, with duplicates simply discarded. We can do this because the events themselves are stateless; it is the database that carries the information about which records require processing.

The records to process are found by querying the records where:

XProcessingRequired__c = TRUE

Once a record has been processed, the platform event’s trigger handler sets XProcessingLastPerformed__c on the record from a separate transaction-scoped static variable, also statically initialized to Datetime.now().

If a record cannot be processed (e.g. there are simply too many records and governor limits would be exhausted), the field remains as it was and the record can be processed again next time. Your processing implementation must ensure that failure to process a given record does not prevent all record processing, otherwise the processing may completely stall.

A nice thing here is that the Platform Event trigger handler can either directly limit the number of records it processes in a single transaction, using the SOQL LIMIT keyword, or watch the various limits relevant to it (such as CPU time, number of queries/query rows, number of DML statements and/or number of callouts) using the Limits class methods. That way it can stop trying to process records once it reaches a threshold against any one of these limits, or when all queried records have been processed.

All the handler needs to do after that is determine whether there are more records to process. If there are, it publishes a single Platform Event for itself to process in a separate transaction, guaranteeing that the processing is “chained” even if no further DML causes the record trigger to fire.
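
A minimal sketch of such a subscriber, continuing the hypothetical Account and X_Processing_Requested__e names and watching just CPU time (the threshold is arbitrary), might look like this:

trigger XProcessingSubscriber on X_Processing_Requested__e (after insert) {
    // All events received here are equivalent, so act once per invocation.
    Datetime performedAt = Datetime.now();

    // Over-query a bounded set of records still marked for processing.
    List<Account> candidates = [
        SELECT Id FROM Account WHERE XProcessingRequired__c = TRUE LIMIT 200
    ];

    List<Account> processed = new List<Account>();
    for (Account record : candidates) {
        // Stop early when the watched governor limit nears exhaustion.
        if (Limits.getCpuTime() > Limits.getLimitCpuTime() - 2000) {
            break;
        }
        // ... perform the actual heavy lifting for this record here ...
        processed.add(new Account(
            Id = record.Id,
            XProcessingLastPerformed__c = performedAt
        ));
    }
    update processed;

    // Chain: if any records remain marked, publish one more event so that a
    // fresh transaction, with fresh limits, picks them up.
    if (![SELECT Id FROM Account WHERE XProcessingRequired__c = TRUE LIMIT 1].isEmpty()) {
        EventBus.publish(new X_Processing_Requested__e());
    }
}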

This chaining appears to be executed with pretty much exactly a 1 second interval each time. This means that there will generally be at most 3600 invocations of the processing per hour, well within limits. It does mean, however, that it is important to try to optimally process as many records in a single execution at a time.

Extending the approach

If you have multiple types of process that should be exclusively executed (i.e. not concurrently) you can extend the above pattern by adding a “type” field to the Platform Events and using it in the handler to choose the processing to be performed. This also requires a small enhancement to how the number of Platform Events published in a given transaction is minimized, turning the simple Boolean into a Set that holds the “types” already published.
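
The enhanced guard might look like this; Type__c is a hypothetical text field on the event:

public class XProcessingEventPublisher {
    // The processing "types" already published in this transaction.
    private static Set<String> publishedTypes = new Set<String>();

    public static void publishOnce(String processingType) {
        if (!publishedTypes.contains(processingType)) {
            EventBus.publish(new X_Processing_Requested__e(Type__c = processingType));
            publishedTypes.add(processingType);
        }
    }
}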

While this introduces some “state” into the Platform Events, that state isn’t uniquely valuable. If an event goes missing, another will likely come along later. Since the database contains the required state for a given processor, the records will still get processed, just a bit later than perhaps expected.

In terms of optimizing the processing, the code should probably over-query the records to be processed and then leverage the Limits API [4] to track progress towards exhausting the available limits. If any one limit gets close to exhaustion, the code should commit what it has done so far and chain to a new invocation.
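
A simple guard over several such limits, using only standard Limits class methods, might look like this; the headroom values are arbitrary and should be tuned to your processing:

public class LimitGuard {
    // Returns true when any watched limit is close to exhaustion.
    public static Boolean nearAnyLimit() {
        return Limits.getCpuTime() > Limits.getLimitCpuTime() - 2000
            || Limits.getQueries() > Limits.getLimitQueries() - 5
            || Limits.getQueryRows() > Limits.getLimitQueryRows() - 1000
            || Limits.getDmlStatements() > Limits.getLimitDmlStatements() - 5
            || Limits.getCallouts() > Limits.getLimitCallouts() - 1;
    }
}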

Handling record deletion is certainly possible using this pattern, though it likely requires a “deletion placeholder” object to record those deletions. Deleted record state can likely be queried from the recycle bin rather than being replicated in the placeholder, which therefore likely only needs to hold the ID of the deleted record. It does require a different “processor type” so these placeholders can be processed accordingly.
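
As a sketch, the deletions might be captured like this; the placeholder object and its fields are hypothetical:

trigger AccountDeletionTracker on Account (after delete) {
    Datetime requiredAt = Datetime.now();
    List<Deletion_Placeholder__c> placeholders = new List<Deletion_Placeholder__c>();
    for (Account deleted : Trigger.old) {
        placeholders.add(new Deletion_Placeholder__c(
            Deleted_Record_Id__c = deleted.Id,
            XProcessingLastRequired__c = requiredAt
        ));
    }
    insert placeholders;
    // Publish (at most) one event for the "deletion" processor type, using the
    // same once-per-transaction guard described earlier.
}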

References

  1. Governor Limits
  2. Platform Event Persistence
  3. Platform Event Limits
  4. Limits API

This document is supported by an implementation example in GitHub.

Summary

I hope this article on governor-friendly asynchronous processing from triggers has helped you. Please share your feedback in the comments.

Phil W

Phil is a Product Architect for Bullhorn, a Salesforce Summit tier ISV Partner. He has been developing and architecting Salesforce AppExchange products since 2017, with a long career before that in product development and professional services in the defence, healthcare, telecommunications and workforce management spaces.
