

Salesforce Key Best Practices: Explained and Simplified
As a Salesforce professional, you have no doubt read articles or attended presentations on Salesforce Best Practices. We want to provide an updated set of key best practices that we not only highly recommend but also greatly simplify via our Best Practices Toolkit. Whether or not you are interested in this particular “Toolkit” product suite, I believe you will find this best-practices material very useful in your job. Our Best Practices Toolkit is the result of over 12 years of intense Salesforce-related consulting. I was personally a billable Salesforce Program Architect at salesforce.com for almost 10 years (focused primarily on large and/or complex Salesforce Orgs). In addition, I spent some of that time as a SWAT-team member helping to quickly analyze and optimize CRM Orgs for medium-sized companies. Let’s explain and simplify Salesforce Key Best Practices.
In both jobs, I learned a great deal about what to do right (best practices), as well as what not to do (anti-patterns). Moreover, the Salesforce-provided products and clouds have continually grown in number and complexity, and we feel customers and consultants are in great need of tools to simplify dealing with all this complexity. This brings up a broader question: what is the cost of non-optimal implementations of software in general? I think the answer lies in the ROI analysis we did last year on implementing Best Practices in the context of CRM: approximately 150% ROI per year (based on cited and validated sources; more details in the Summary). So if you are not utilizing best practices, it is costing your company in lost revenue, increased per-project costs, unhappy customers and users, and more. Anyhow, let’s now get to this article’s more specific content and purpose!
Salesforce Key Best Practices: Explained and Simplified
The following sections discuss seven Salesforce-specific challenges and the best practices for addressing them in general terms. We will also refer you to a simple, flexible, and cost-effective way to address all these challenges as you design, develop, and manage your Salesforce Org(s): Platform Technology’s Best Practices Toolkit. More specifically, here are the topics we will discuss:
- Why (and how) you need to externalize Apex Trigger code into a Trigger Framework (the critical reasons for, and benefits of, doing this well)
- As Flows become more powerful, feasible, and easy-to-use, I will discuss some best practices and ways to gain control over the execution of those flows (e.g. to avoid recursion within a business transaction, or to disable them temporarily).
- How to deal with all the various errors that are happening in your system (proactively and automatically) — many of which you are not seeing!
- Test classes: How do you know when they begin failing (in PROD or FULL sandboxes)? Address this proactively before they impact project schedules!
- Rules Engine: Why is it nice to have a simple-to-use Rules Engine where you can externalize business logic that can change frequently rather than hard-code it in Apex or Flows?
- I will discuss the need for a management hierarchy in the platform and the benefits to your business processes and applications when leveraging this capability.
- The need for data deletion and archiving for stale data: Customers tend not to do this and/or not to do it cost-effectively.
Now, let’s dive into each of the above.
Apex Trigger Best Practices
Here is a summary of the key best practices and guidance regarding Apex Triggers:
- Externalize your Apex Trigger code into Apex classes (via a “Trigger Framework”): the reasons we do this go back to computer science 101:
- Increase re-use via Apex classes (Apex Triggers cannot be reused “as is” in their legacy format). This class-level/method-level re-use is achieved via compartmentalization or helper-classes, utility-classes, virtual/inheritance classes, etc. You are basically breaking up the logic into its logical parts to: a) increase cohesion and b) decrease coupling. This is simply computer science terminology for: a) organizing your code well, b) reducing technical debt, c) increasing readability/maintainability, and d) increasing reusability.
- Increase code quality due to the requirement for code-coverage test classes
- Lay the groundwork for improving performance by separating code into different transactions (e.g. via Apex Queueable classes, Platform Events, @future methods, etc.). I have done this many times; it works incredibly well and makes for happy users and customers!
- Gain control over the execution of this Apex code (this is huge):
- Trigger Frameworks, when implemented correctly and with the right features, allow you to do many powerful things: a) avoid recursion (ideally at a fine-grained per-action level such as afterInsert), b) disable your Apex code for integration users or ETL jobs, either temporarily (time-based) or per-user, and c) control the order of execution of one class versus another for the same object/action. Note that our Framework is also extremely fast, executing in approximately 30 milliseconds (with Debug enabled).
- Enable diagnostics on how long this code is taking (per action/method) so you can understand where some of your performance costs are, and how much time each trigger action/method is taking (in milliseconds/seconds). I did this recently and discovered that a customer’s former junior developer had used recursion to implement a highly complex mathematical function! I was then able to quickly rework the code to avoid recursion (e.g. via a loop or a simple Math function). This improved performance by an estimated 90% for this transaction! However, a more common anti-pattern is simply too many inefficient SOQL queries, or highly inefficient algorithms implemented by developers who are less experienced (or were rushed by their project deadline or agile schedule).
The example below shows how simple it is to use our Trigger Framework. Note that the trigger is just one line of code, and the handler class implements a single beforeUpdate method. This handler-class example happens to invoke the Rules Engine to get back a Decimal value representing the Opportunity’s estimated ROI.
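Here is a minimal sketch of what such a trigger and handler pair could look like. The dispatcher class (pform.TriggerDispatcher), the base handler class (pform.TriggerHandler), the 'OptyROI' scenario name, and the ROI_Estimate__c field are illustrative assumptions; only pform.Rules.evaluate() is the Toolkit API referenced later in this article.

// One-line trigger that delegates all logic to the framework's dispatcher
// (the dispatcher class/method names are assumptions, not the Toolkit's documented API).
trigger OpportunityTrigger on Opportunity (before update) {
    pform.TriggerDispatcher.run(new OptyTriggerHandler());
}

// Handler class with a single beforeUpdate method that invokes the Rules Engine
// to compute an estimated ROI for each Opportunity in the transaction.
public with sharing class OptyTriggerHandler extends pform.TriggerHandler {
    public override void beforeUpdate(List<SObject> newRecords, Map<Id, SObject> oldMap) {
        // Evaluate a hypothetical 'OptyROI' scenario against the records in this transaction
        Decimal roiEstimate = (Decimal) pform.Rules.evaluate('OptyROI', newRecords);
        if (roiEstimate == null) {
            return; // no Rule-Group evaluated to TRUE for this scenario
        }
        for (SObject so : newRecords) {
            // ROI_Estimate__c is a hypothetical custom field used only for illustration
            so.put('ROI_Estimate__c', roiEstimate);
        }
    }
}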
Flow Best Practices
Flows are automations that run in a similar manner to Apex Triggers (within a larger business transaction that a user or background job initiated). And so, just like Apex Trigger logic, the same flows can inadvertently get executed multiple times during these business transactions. The result is both poor performance and an increased likelihood of errors (a brittle system). In addition, like Apex Triggers, you will often want to disable Flows during ETL jobs or other batch-type applications or data integrations. Finally, if you write a test class that causes a flow to be invoked and the flow sends an email, you will get an error when running the test class. For all of these reasons, the best practice here is to gain more control over when your Flows are executed. This is the reason we built our FlowControl product. FlowControl gives you the following features:
- Ability to specify a recursion limit (usually “1”) controlling how many times this Flow is allowed to run within a single business transaction.
- Ability to disable flows temporarily for ETL jobs, other batch-type integration jobs, or Test class executions.
- Ability to disable the flow for a particular user (e.g. an integration-user).
- Ability to group flows together in a parent-child relationship (using Custom Metadata relationship) so you can quickly apply the above features to an entire set of Flows (e.g. for a particular object or particular scenario/use-case).
Finally, this is all achieved by: 1) leveraging our Toolkit-provided Flow templates (or, if not using the templates, adding two elements to the beginning of your flows), and 2) creating a custom-metadata record for the Flow(s); a sketch of such a record follows the diagram below. What follows is an architectural depiction of the Flow Control product.

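As a rough illustration only, a Flow Control custom-metadata record might capture settings along these lines (the field names and values below are hypothetical, not the product’s actual schema):

Flow API Name: Opportunity_After_Save_Flow
Max Runs Per Transaction: 1
Active: true
Disabled Until: 2025-01-31T00:00:00Z (temporary disablement, e.g. during an ETL load)
Disabled For User: integration.user@example.com
Parent Flow Group: Opportunity_Flows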
Error Management Best Practices: How to Become an Instant Hero
Salesforce does so many things well; however, one thing missing from the platform is the ability to automatically coalesce/aggregate all your Salesforce errors into a single, easily accessible, and actionable place. We address this need with our Best Practices Toolkit Error Management product. We provide several different real-time “listeners” that wait for errors coming from the various error sources (e.g. Apex code, Flows, Batch programs, Apex Exception Emails, Event Monitoring alerts, etc.). We then support taking automated actions when a particular type (Category) of error occurs. The following actions are currently supported:
- Log the error to the ErrorLog object (defaults to True)
- Email an administrator
- Write a Debug message to the Debug Log (level specified here also)
- Throw an Exception (the current exception, if any, or you can specify a different/overriding exception)
- Roll back the Transaction
In addition, we provide an ErrorLog object and associated List View, as well as an ErrorLog-based custom LWC Dashboard page, to summarize your error activity. Finally, we also provide an Error_Handler_Event platform event that you can subscribe to using your integration middleware (e.g. MuleSoft). This allows you to easily send the detailed error information to your enterprise logging tool, where it can be aggregated with logs from all your other (non-CRM) enterprise applications. What follows is an architectural depiction of the Error Management product:

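If you also want to consume these events on-platform (in addition to middleware subscribing to the event channel), a minimal Apex subscriber could look like the sketch below. The __e API-name suffix and the lack of specific event fields are assumptions; consult the Toolkit’s object definitions for the actual fields.

// Platform-event subscriber trigger (sketch): runs asynchronously after each
// batch of Error_Handler_Event__e events is published.
trigger ErrorHandlerEventSubscriber on Error_Handler_Event__e (after insert) {
    for (Error_Handler_Event__e evt : Trigger.new) {
        // In practice you would map the event's fields (category, message, stack trace, etc.)
        // onto whatever logging or alerting payload your org uses.
        System.debug(LoggingLevel.ERROR, 'Toolkit error event received: ' + evt);
    }
}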
How to Proactively Manage Test Classes (and Why)
Test classes (and the 75% minimum code coverage requirement) are a huge benefit of the Salesforce Platform, ensuring that the quality of your deliverables is higher than in your typical software deployment. However, they very frequently begin to fail over time (in Production and then in any refreshed sandboxes). This is typically because the CRM team adds:
- new required fields,
- new validation rules,
- new Apex code that breaks the old code, and
- new Flows.
Any of these can break the pre-existing Test Classes. You are supposed to continually monitor the success of your test classes, but frankly, almost no CRM customer does this. As a result, your CRM team’s projects often get delayed because the developer(s) have to go back and fix the Test Classes they were depending on. This often delays each affected project significantly (often doubling the time required if the developer is not the original author of the test class(es)). This is why we developed our Test Class Manager product. The Test Class Manager is a batch job that runs nightly (or weekly, whatever you want) and finds any Test Classes that are failing in that Org (Production or Sandbox). It then provides detailed error information (the “why it failed”) to our Error Management product. The result is that your CRM team immediately becomes aware of any Test Class that needs tweaking (along with specific per-test error details) in order to: 1) avoid delays in future projects, and 2) ensure your test class coverage remains adequate. This is an easy, efficient, and low-stress way to deal with these issues (trust me). Finally, the configuration for the Test Class Manager is in our ToolKit Settings custom-metadata configuration record.
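To make the mechanism concrete, here is a minimal sketch (not the Toolkit’s actual implementation) of how a nightly job can detect failing tests using the standard ApexTestResult object; the LAST_N_DAYS window and the debug-only output are simplifications.

// Query recent test-method failures and surface the "why it failed" details.
List<ApexTestResult> failures = [
    SELECT ApexClass.Name, MethodName, Message, StackTrace
    FROM ApexTestResult
    WHERE Outcome = 'Fail' AND TestTimestamp = LAST_N_DAYS:1
];
for (ApexTestResult failure : failures) {
    // A real job would hand these details to the Error Management product instead of Debug.
    System.debug(LoggingLevel.ERROR,
        failure.ApexClass.Name + '.' + failure.MethodName + ' failed: ' + failure.Message);
}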
Business Rules: Benefits, Use-Cases, and Making it Easy
Business Rules engines are frequently used in all types of software. For example, inside a CPQ product is a business rules engine. The purpose of these products is to allow business logic that can change frequently to be modified without updating code (or Flows). So the business logic/rules are changed via declarative record updates rather than code or visual-coding (Flow) changes. The challenge is that most business rule products (e.g. CPQ-type products) have two problems: 1) they are expensive, and 2) they are very difficult to use, even if you are just using the Business Rules engine itself and not CPQ. However, our Best Practices Toolkit provides a simple-to-use Rules Engine. Moreover, it also leverages custom-metadata records, so your changes are easily (re)deployed from Sandboxes to Production, etc. Via just one type of custom-metadata record, it supports multiple rule scenarios, and within each scenario you specify one or more Rule-Groups. Each Rule-Group can then contain any number of Rules, each of which is AND’ed or OR’d together to ultimately give you a TRUE or FALSE conclusion. If a Rule-Group evaluates to TRUE, the Rules Engine can do one of three things:
- Set a field value on the primary object,
- Return a value per a calculation, or
- Take an action such as “Archive” or “Delete” if the scenario is for our Data Archive/Delete product (described below).
Each Rule can then be configured as follows in a Rules custom-metadata record:
1. Rule Name
2. Active (true or false) – is the rule currently Active or Inactive
3. Scenario name
4. Grouping name (which Rule-Group does the Rule belong to)
5. Key Object Name
6. Seed Object Field (used in final Rule-Group result if TRUE)
7. Order (the order in which the Rule should be evaluated within its Rule-Group)
8. Comparison Field Name in the form of <Object>.<Field> (can be any type of object)
9. Comparison (=, <, >, <=, >=, !=, contains, does-not-contain)
10. Comparison Field Value – the mechanism for determining whether a particular Rule is TRUE or FALSE. These can be: numbers, booleans, dates, date-times, text
11. Operation (AND, OR, Add, Subtract, Multiply, Divide, Assign, Archive, Delete)
12. Result
13. Effective Date (optional) – determines when this rule takes effect
As a developer, in order to run the Rules Engine, you simply invoke one method, as shown above in the OptyTriggerHandler class (the invocation below assumes it returns a String):
String result = (String) pform.Rules.evaluate('ScenarioName', objectList);
You may pass in one or more records, and the list can consist of different object types (e.g. an Opportunity, an Account, a Case, etc.). The Rules are evaluated in numeric sequential order using the Order field (1, 2, 3, etc.). We support the following Operator types: AND, OR, Assign, Add, Subtract, Multiply, Archive, and Delete. If a Rule-Group’s Rules (its AND/OR combinations) evaluate to TRUE, then the final (non-logic) operation is executed: Assign, Add, Subtract, Multiply, Archive, or Delete. The operation is applied to the Seed Input Field, or, if the operation is an action, the action is taken and/or the resulting value is returned. This means the Result field is used as the value to be Added, Subtracted, or Multiplied; for “Assign”, the Result value is assigned to the Seed Input Field.
The result of the operation on the Seed_Input_Field is then also returned by the Rules Engine’s evaluate() method. The data types supported are: Text, Decimal/Numeric, Currency, DateTime, and Boolean. You can also optionally use the Effective Date field to ensure that a Rule-Group does not take effect prior to a particular go-live date. If none of the Rule-Groups for the scenario are TRUE, then the evaluate method returns null. Note that if multiple Rule-Groups exist for a scenario (which is perfectly normal), the first one to be TRUE is used. Finally, it should be noted that our Rules Engine executes extremely fast: approximately 25 milliseconds per invocation/evaluation (i.e. per scenario).
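For illustration, here is a minimal usage sketch; the 'OptyROI' scenario name, the queried fields, and the Decimal return type are assumptions for this example.

// Build a list containing records of different object types and evaluate a scenario against them.
Opportunity opp = [SELECT Id, Amount, AccountId FROM Opportunity LIMIT 1];
Account acct = [SELECT Id, Industry FROM Account WHERE Id = :opp.AccountId];
List<SObject> objectList = new List<SObject>{ opp, acct };

Object outcome = pform.Rules.evaluate('OptyROI', objectList);
if (outcome != null) {
    // A Rule-Group evaluated to TRUE; this scenario is assumed to return a Decimal.
    Decimal roiEstimate = (Decimal) outcome;
    System.debug('Estimated ROI: ' + roiEstimate);
} else {
    // No Rule-Group in the scenario evaluated to TRUE.
    System.debug('No rules matched for this scenario.');
}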
Need for a Management Hierarchy and Associated Usage/Automation
Salesforce does not provide a management hierarchy for an Org’s users. It does provide a Role Hierarchy, but that is strictly for sharing record visibility upwards in the organization; it cannot, and should never, be used as a management hierarchy for “who reports to whom” (e.g. for Approval Processes, Contract Lifecycle Management, Approval Signatures, etc.). As a result, Platform Technology provides a Management Hierarchy object (actually named “Approval Hierarchy”, but its use-cases go far beyond just approvals). It is backed by automation (Apex code) that auto-populates relevant object records for the Approval Hierarchy scenarios you configure. A real-world example of this is how one of our customers implemented approval processes to approve the estimated ROI of each opportunity. They have two lines of management (Sales and Operations) that have to approve an Opportunity, so they configured two Approval-Hierarchy custom-metadata records:
Sales Custom Metadata:
Scenario Name: Sales
Active: true
Object Name: Opportunity
Seed Object Fields: Opportunity.OwnerId.User.ManagerId
Approval Process Name: <blank>
Skip Entry Criteria: false
Operations Custom Metadata:
Scenario Name: Operations
Active: true
Object Name: Opportunity
Seed Object Fields: Opportunity.Branch_Manager__c
Approval Process Name: <blank>
Skip Entry Criteria: false
They then updated the Management Hierarchy object records with their relevant Sales and Operations management users in our Platform Technology Approval_Hierarchy object. Finally, in their OpportunityTriggerHandler for Opportunity updates, they simply invoke one line of code (our one-line Apex API) to automate the population of these users into the correct fields on that object; a sketch of this call follows the diagram below. Note that the “Seed Object Field” is a metadata field that specifies the object’s starting point for finding the initial person/manager in the hierarchy (the entry point into the hierarchy records). What follows is an architectural depiction of the management-hierarchy product:

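As referenced above, the one-line call from the Opportunity handler might look something like the sketch below; the ApprovalHierarchy class name, the populate() method, and its signature are assumptions, not the Toolkit’s documented API.

// Inside OpportunityTriggerHandler.beforeUpdate (sketch), where newRecords is the list of
// Opportunity records in the current transaction: populate the approver fields on each
// record for every active Approval Hierarchy scenario configured for this object.
pform.ApprovalHierarchy.populate(newRecords);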
The benefits of this product are the following:
- Automatically populate any object’s records with the appropriate management personnel for your particular scenario (in the above example, Sales and Operations for Opportunity Approval processes)
- When the personnel/management hierarchy changes, your approval processes (or other business processes) are unaffected and do not have to be modified. In other words, no hard-coding of anything. And if someone (typically a manager) leaves the company or goes on vacation, you can simply make a permanent or temporary change to the affected Approval_Hierarchy record(s).
- Gain a custom LWC page showing your entire management hierarchy by scenario for your company, as part of our managed-package hierarchy product.
Data Deletion and Archiving: Keep it Simple and Cost-Effective
Salesforce customers often fail to delete or archive their stale data in a timely manner, and this ends up costing them extra money due to the expense of Salesforce standard- and custom-object storage. As a result, we provide a data delete/archive tool that is easy to use and part of our Best Practices Toolkit. You specify, via a custom metadata record, the object whose records you want deleted and/or archived. We store any archived data securely on-platform in Big Object records (a JSON-serialized format of the sObject record) at approximately 1/20th of the cost of normal storage, and you get the first 1 million Big Object records free with Enterprise and Unlimited Edition Orgs.
Below is a list of the custom metadata fields you may use to configure which records are chosen and the behavior of the associated batch job that runs intermittently to perform the deletion and/or archiving. It is worth noting that we also added support for optionally using our Rules Engine to help narrow down the records that you decide to delete or archive.
Action: Delete or Archive (Unarchive is currently a roadmap feature in QA)
Active: true or false
Admin Email: email of administrator to notify of the batch results
CronDelay: delay between batches if archiving/deleting multiple objects
Object Name: object to delete/archive
Where Clause: narrows down SOQL query used for which records to delete/archive
Rule Engine Per Record: requires an “Archive” scenario to be configured in Rules
Scope Size: batch scope size to use
Primary Parent Field: (Unarchive is currently roadmap feature in QA)
UnArchive Start Date Time: (Unarchive is currently roadmap feature in QA)
UnArchive Start End Time: (Unarchive is currently roadmap feature in QA)
UnArchive Where Clause: (Unarchive is currently roadmap feature in QA)
Note that you can currently Unarchive the BigObject records simply by querying the associated Archive__b object and downloading those records using an ETL or middleware tool and then inserting them into a database or into Salesforce.
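As a rough sketch of that unarchive path, on-platform code could read the archived rows back and deserialize them; the Archive__b field names below are hypothetical, and remember that big-object SOQL filters may only use the object’s index fields.

// Query archived rows for a given object type and rebuild the original records from JSON.
// The field names are hypothetical; Object_Name__c is assumed to be an index field.
List<Archive__b> archivedRows = [
    SELECT Object_Name__c, Record_Id__c, Serialized_Data__c
    FROM Archive__b
    WHERE Object_Name__c = 'Case'
    LIMIT 200
];
for (Archive__b row : archivedRows) {
    // Each row is assumed to hold the original sObject serialized as JSON.
    Case restored = (Case) JSON.deserialize(row.Serialized_Data__c, Case.class);
    System.debug('Restored: ' + restored);
}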
Summary
Implementing Salesforce best practices is critical for the medium-term and long-term health of your Org (and for your team’s sanity). The many estimated average benefits are:
- Saves countless hours of frustration
- Makes your team 30% more productive
- Reduces development costs by 25%
- Increases quality by 50%
- Provides 20% faster time-to-market
- Increases resource utilization by 20%
- Increases both scalability and flexibility by 40%
- Lowers project risk by 35%
We all inherently know the lost opportunity cost and inefficiencies of non-optimal software, and CRM is no exception. In fact, given how critical CRM is in today’s world, it is even more important to do it right! You can find our Best Practices Toolkit managed package via the following links:
Web: https://platformtechnology.ai (click on Products page)
AppExchange: https://appexchange.salesforce.com/appxListingDetail?listingId=8be4fe75-ce34-48a4-94e3-3457bae50dda
LinkedIn: https://www.linkedin.com/company/platformtechnologies
Email: sales@platformtechnology.ai