In this post I want to provide an insight into the mind of a Quality Assurance Engineer when it comes to testing Salesforce.com applications. I also want to highlight gaps in testing that I have observed over the last 14 years, and provide some insight into what your test teams should be focussing on, and how often.
Firstly, while I’m not a QA engineer, I have earned plenty of scars from over 30 years in IT, having worked my way up from the bottom. When you’ve had to answer customer calls about software bugs you get a much better understanding of the end user viewpoint.
Why Test?
Seems obvious, so let’s not spend too much time on this, but let me rephrase it as “Why Manual AND Automated Testing Matters”. That may surprise you to hear from the CTO of a test automation company, but you should not aim for 100% automation, because there is also value in manual testing. We’ll explore this further later.
For many consultants, whether you consider yourself a developer or an admin, testing is where you find out that all that fantastic stuff you wrote:
1. Doesn’t work as intended
2. Does exactly what the story acceptance criteria said, but doesn’t do what your stakeholders/users really wanted
3. Works exactly as expected and exceeds all expectations every time
The good news is that if you answered 1 and/or 2 then you’re not alone: this is normal and easily fixed. If your answer was 3, then congratulations and I’m glad to hear you’re performing first-class application development and testing. Tweet me @RichClark808 to tell me how you got there without ever experiencing 1 or 2 first!
A commonly used phrase is ‘test early and test often’. I’m not talking about Test Driven Development (TDD) here, though that is a valid practice for many teams to follow. I mean something much simpler, and that is the value of fixing bugs early, which has a truly astonishing payback.
When to Test?
The when is more easily answered by classical testing models, and which you follow largely depends on your software delivery methodology. I’m going to assume you’re all following some kind of agile process, but much of this applies equally to a waterfall approach.
Firstly, there are four main reasons to initiate tests on Salesforce:
- Change Management
- Almost every Salesforce org undertakes regular changes to enhance their system, take advantage of new features or expand usage to new teams. Salesforce has been incredibly successful at organic growth because it is so easy to change and delivers recognisable business benefits. These changes should be tested before being applied to a production org or enabled for all users in the case of a report, list view or dashboard.
- Release Testing
- Salesforce deploy three releases per year and for each they always provide a Pre-Release and a Sandbox Preview phase. This is to allow you to find issues before they hit your production orgs! Use it wisely.
- During release testing you should focus primarily on a full Regression Test before you implement any changes, and then repeat that testing afterwards. Consider those users recently impacted by the new requirement for users to have permissions granted for Apex AuraEnabled methods: after enabling those permissions, a full regression test should be undertaken to confirm whether further changes are needed.
- In addition to the three releases per year, Salesforce also deploy weekly patches to fix known issues. These rarely have a negative impact, but you may want to make use of regression tests for regulatory compliance or to mitigate business risk. For example, you may have a workaround in place for a bug; when that bug is fixed, the fix could break your specific configuration.
- Release Update Testing
- Release updates are a new name (they were previously called Critical Updates), and Salesforce are providing more and more features, both functional and non-functional, through release updates or feature flags. These let you apply a change, test the impact and roll that change back via a convenient mechanism. The changes are normally targeted to be auto-activated in a future release, but you have more time to check the impact.
- During release update testing you should focus on Regression Testing, but narrow it down to the areas of primary impact first. Save your full regression test until you’re happy the change has had the expected impact.
- You should always test release updates in a sandbox before enabling them in a production org. You have plenty of time, so use it wisely rather than leaving it until the day before the deadline.
- Integration Testing
- Many companies have their CRM at the heart of their business and have integrated other systems that rely upon its data. If you’re making changes to those ancillary systems then you must ensure that your integrations remain operational and working as expected on a regular basis, daily or even hourly.
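That ‘daily or even hourly’ cadence is usually achieved with a scheduler in your CI service. As one illustration (in GitHub Actions syntax), a regression or integration suite can be triggered on a cron schedule; the `run-regression.sh` script name is an assumption, standing in for whatever invokes your own test tool:

```yaml
# Hypothetical nightly regression job (GitHub Actions syntax).
# The scripts/run-regression.sh path is an assumption: substitute
# whatever command invokes your own automation tool.
name: nightly-regression
on:
  schedule:
    - cron: '0 2 * * *'   # every day at 02:00 UTC
  workflow_dispatch: {}    # also allow manual runs
jobs:
  regression:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run regression suite
        run: ./scripts/run-regression.sh
```

An hourly integration check would simply use a tighter cron expression such as `0 * * * *`.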
In addition to the above trigger points for running tests there are multiple test phases to consider in a typical change lifecycle. The primary phases of testing to consider on a Salesforce project or BAU team are:
- Design Testing
- Rarely done in Salesforce implementations, unfortunately, but when embarking on large changes or projects, or delivering a new application, providing a clickable prototype and allowing users to give feedback will quickly highlight usability issues and navigation dead ends in your application. For a Salesforce change this could be done by giving early access to a scratch org or sandbox with your changes before you add business logic, processes, or code. Image wireframes can easily be inserted as HTML elements as placeholders.
- Unit Testing
- Most people talking about unit testing on Salesforce are (unfortunately) obsessed with passing Apex code coverage thresholds, ideally with meaningful asserts and limit checks added. So when I say unit testing, I’m talking about the process of the person making a change testing that it works as they expect before handing the change over. This includes both code and low-code/declarative changes.
- Unit testing should always be undertaken by the person making the change and should focus primarily on the item being changed. It should include the happy path (i.e. the acceptance criteria) and some limited lateral thinking about alternate scenarios.
- System Testing
- Most people will be familiar with system testing, which generally involves a test resource checking that the changes made comply with the acceptance criteria and/or an agreed test plan of scenarios and variations. System testing should do more than just check the happy path works: it should also explore the boundary conditions of alternate flows through the application, and regression-test that existing functionality has not been accidentally compromised.
- Regression Testing
- Regression testing should ideally be an automated process kicked off on a scheduled basis every day, and at least as part of a Continuous Delivery (CD) pipeline. If you don’t have an automation tool or CD pipeline then you can at least use manual testing to support Salesforce release windows and internal project deliveries. The scenarios and test scripts can then be automated later and run more regularly to catch issues earlier.
- Regression testing is a key requirement for finding unintentional errors or changes in behaviour. During system testing the user may not experience an error, but they would not necessarily realise that an event wasn’t fired and an external system such as Oracle or SAP was not updated.
- Many project teams have come unstuck after a successful initial roll-out of Salesforce because they did not factor the time or cost of manual regression testing into their phase 2 project delivery. This is known as the Regression Gap, and it can be significantly reduced with automated testing.
- Smoke Testing
- Often used in lieu of a full regression test, smoke testing validates that a change has been deployed. This can be manual (e.g. deploy a new Flow version and check it’s working; oops, go and activate it, retest, yay), or automated using a subset of your regression tests that focuses on the key business processes that were under change. Smoke tests can be included in Continuous Integration processes but need to be pared down so they do not slow down your CI process and limit the flow of changes.
- User Acceptance Testing (UAT)
- Scarily, I’ve been on projects where the customer has said they don’t have the resources to undertake UAT and just asked for evidence from our system testing phase. There’s a whole presentation to give on that subject and on avoiding that situation.
- During UAT, selected users from the business should spend a dedicated amount of time following scripts, plus undertake some exploratory testing. They should avoid simply retesting everything that has already been system tested, but may wish to duplicate tests where they lack evidence from earlier phases, since they will ultimately be committing to signing off the delivery.
- Remember earlier where I pointed out that a perfect delivery may provide everything that was asked for but not what the business/stakeholder actually needs? UAT is there to catch those situations, and if system testing were perfect (impossible in my opinion) then UAT should only identify gaps in the original requirements, or consequences that may not have been thought of. For example, imagine you implemented a change to auto-populate line items on an Opportunity but did not give your users an option to delete them all and recreate them when they make an error.
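To make the unit-testing mindset described above concrete, here is a minimal, platform-agnostic sketch in Python: cover the acceptance criteria (happy path) first, then apply some lateral thinking to alternate scenarios. The discount rule and its tiers are hypothetical, invented purely for illustration.

```python
# A sketch of unit testing as described above: happy path first, then
# alternate scenarios. The discount rule below is hypothetical.

def apply_discount(amount, tier):
    """Return the discounted amount for a hypothetical pricing rule."""
    rates = {"bronze": 0.0, "silver": 0.05, "gold": 0.10}
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 - rates.get(tier, 0.0)), 2)

# Happy path: matches the story's acceptance criteria.
assert apply_discount(100.0, "gold") == 90.0

# Alternate scenarios: an unknown tier falls back to no discount,
# and invalid input is rejected rather than silently accepted.
assert apply_discount(100.0, "platinum") == 100.0
try:
    apply_discount(-1.0, "gold")
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

The same habit applies to declarative changes: check the acceptance criteria first, then ask what happens with an unexpected input or an edge-case record.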
Who Should Test?
I’ve been repeatedly surprised when team members are shocked to be asked to take ownership of signing off a feature they’ve developed or requested. To simplify things, and without going into a full RACI matrix here, I suggest the following roles should be responsible for completing each stage of testing:
| App Designer | Dev/Admin | Tester | QA Engineer | DevOps Eng. | Product Owner |
Where to Test?
When it comes to the ‘Where’, most companies I’ve worked with are quite mature, and the days of not having a Sandbox to work with should now be quite rare. From Enterprise Edition upwards you should be able to follow the minimum setup above, albeit with Dev sandboxes where insufficient DevPro, Partial Copy or Full sandboxes exist. There are two main approaches, org-driven and source-driven development, but they’re broadly aligned when it comes to testing.
If you’re following a source-driven approach then you should complete all your unit testing for a change/story in a Scratch Org before committing your changes and raising a pull request in your source repository. You can continue using a Scratch Org for running all unit tests, or even for system testing of multiple stories as part of an epic or sprint validation.
Likewise, if you’re following a more traditional org-driven development model you’ll be using one or more Dev sandboxes to create your changes and then pushing them through to, I would suggest, a Dev Pro sandbox which has a subset of test data already provisioned. Depending on the amount of data required, you may need a larger sandbox to complete your System Testing. After completing your system testing your changes should be ready to be part of a release candidate, and here our approaches come together, though how we deploy may be different.
Regardless of the tools you use to deploy, you ultimately have a release artifact that you deploy and that you want to undertake Regression Testing on. Your regression environment should be as complete as possible, ideally using a connected suite of staging systems to provide an end-to-end test environment. Regression may uncover issues with the release artifact itself in terms of missing changes, changes not deployed to ancillary systems, or changes that are incompatible or break existing functionality. Typically you’d cycle through more than one release artifact and multiple testing iterations before you’re ready to involve end users for UAT.
For UAT purposes I recommend you use a Full Sandbox, or at least a Partial Copy with representative data copied from Production, ideally with redacted data using Data Mask for example.
Following any final fixes (which go through the same iterative stages as above), and after you’ve pushed your final release artifact to your Production environment, you may also want to Smoke Test that your changes were deployed correctly. This may include specific checks such as validating that your new Flows and Lightning FlexiPages are activated, things often left to post-deployment manual steps. We don’t normally want to undertake destructive actions on live data or use real data records, so you may have some specific test data for this purpose that avoids outbound interactions with real users or real systems.
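One way to keep a smoke run pared down to key business processes, as suggested earlier, is to tag tests and execute only the smoke-tagged subset. The sketch below is a hypothetical, tool-agnostic illustration of that tagging pattern; the test names and tags are invented:

```python
# Sketch of paring a regression suite down to a smoke subset via tags.
# Test names and tags are illustrative, not from any real suite.

REGISTRY = []

def test(*tags):
    """Decorator that records a test function along with its tags."""
    def wrap(fn):
        REGISTRY.append((fn, set(tags)))
        return fn
    return wrap

@test("smoke", "regression")
def opportunity_close_happy_path():
    return "ok"

@test("regression")
def bulk_lead_conversion_limits():
    return "ok"

def run(tag):
    """Return the names of the tests carrying the given tag."""
    return [fn.__name__ for fn, tags in REGISTRY if tag in tags]

# A post-deployment smoke stage runs the small fast subset;
# the nightly regression run executes everything.
assert run("smoke") == ["opportunity_close_happy_path"]
assert len(run("regression")) == 2
```

Real tools express this with test plans or suite filters, but the principle is the same: one suite, two views of it.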
What to Test?
Almost everything up to now has been fairly generic, within reason. This is where things get completely Salesforce-specific. What to test depends on a number of factors, such as your business’s appetite for risk, regulatory compliance, the quality of solutions being delivered into testing and, most of all, the phases of testing I highlighted above. The value you get out of testing, and the value you get out of automating some of those tests, can easily be measured.
I’ve split the following out into a table to indicate a suggested priority of each category of change that may occur against the testing methods above to summarize my recommendations. I’ve grouped changes where the same set of recommendations apply.
| Type of Change \ Testing Phase | Design | Unit | System | Smoke | UAT | Regression |
|---|---|---|---|---|---|---|
| Salesforce Release Sandbox Preview | n/a | n/a | n/a | Low | n/a | High |
| Enable Salesforce Release Update (formerly Critical Update) | n/a | High | High | Low | n/a | High |
| Salesforce schema changes, including new fields, objects or changes to field permissions | Low | High | High | Medium | Medium | High |
| Changes to business processes, including workflows, Process Builder, non-visual flows and assignment rules | Low | High | High | High | High | Medium |
| Changes to communications for internal use only, including copy and merge fields such as Email Templates, Notifications, Chatter posts, Quip integration etc. | Low | Medium | Low | Medium | Low | Medium |
| Changes to communications for external use, including copy and merge fields such as Email Templates, Notifications, Chatter posts, marketing automation rules and generated documents | High | High | High | Medium | High | Medium |
| Changes to UI elements, including page layouts, Lightning FlexiPage layouts, Visualforce page edits, Lightning components and visual flow components | Low | High | High | Low | Medium | Medium |
| Changes to reports and dashboards or analytics tools, including Wave and Tableau | High | Medium | Medium | Low | High | Low |
| Changes to Apex code, including Apex triggers and web services | Low | High | High | Low | Medium | High |
| Changes to org security, including certificates, single sign-on, or changing a connected app permission | Low | High | Medium | Low | Low | High |
Another way to interpret this table is the relative level of risk if you don’t undertake testing for these types of changes at each potential testing stage. You can interpret that as a risk of incurring rework and a risk to your business in terms of employee or customer churn.
What NOT to Test?
As important as knowing what to test, I’d like to call out some items that, in my opinion, you should not create tests for. We do see customers add tests for these items, and for customers in regulated industries this is required, but in the majority of cases you’ve purchased a subscription for an app that’s already tested, and you do not need to test the following:
- Navigation between console tabs and sub-tabs – unless you’re looking to assert that related objects are deployed as sub-tabs or primary tabs
- Salesforce Related list View All links
- Changing between Apps and Tabs via the App Launcher. Tools like Provar do this app switching for you; testing the App Launcher UI itself is already done by Salesforce.
- List view filtering and sorting – again unless you’re using this for a specific test scenario.
- Utility bar recent items, history, notes etc – just stick to any quick actions you create
- Chatter interaction – unless you have customization specific to posting, liking or commenting on a post.
This list is far from exhaustive, but hopefully you get the idea. Focus your efforts first on the areas you’re customizing. There is no value in testing that a vanilla Salesforce org behaves as a vanilla org should, since Salesforce have already invested $millions in that.
How to Test?
The following table shows the suggested ways and tools for testing at different phases in the lifecycle:
When it comes to converting your manual tests to automated tests, it’s also very important to consider ways you can optimize tests when you’re not reliant on a human or on UI-only interaction. A big mistake many teams make is to try to automate a manual test script without identifying smarter ways of doing so. For example, if you have key business processes or validation rules you want to test, it’s very easy to exercise these with an API call early in your test cycle, and to split that validation from checking whether a field is visible to a certain group of users, or remains ‘above the fold’ on key screens.
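As a hedged sketch of that API/UI split: the stub below stands in for a real REST create call, so a validation rule can be exercised quickly at the API layer while the slower UI run is reserved for purely visual checks. The function names, the record id and the validation rule itself are all illustrative:

```python
# Sketch of splitting API-level validation from UI-level checks.
# fake_api stands in for a real REST call to your org; in practice
# this would be an authenticated POST to an sObject endpoint.

def create_opportunity(api, payload):
    """Create a record through the supplied API layer and return the result."""
    return api(payload)

def fake_api(payload):
    # Stand-in for the platform: enforce a hypothetical validation rule
    # that Amount must be positive, just as the server would.
    if payload.get("Amount", 0) <= 0:
        return {"success": False, "errors": ["Amount must be positive"]}
    return {"success": True, "id": "006xx0000000001"}

# Validation rules are exercised quickly at the API layer...
assert create_opportunity(fake_api, {"Name": "Big Deal", "Amount": 0})["success"] is False
assert create_opportunity(fake_api, {"Name": "Big Deal", "Amount": 5000})["success"] is True

# ...leaving the UI run to check purely visual concerns, e.g. that the
# Amount field is visible to the right profile and above the fold.
```

The payoff is speed: dozens of rule variations can be checked via the API in the time one UI journey takes.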
With tools like Provar you can mix API and UI interactions in the same test case, plus create startup and teardown tests that execute automatically when a test cycle is started. This is a powerful way of both populating your test environment with test data and ensuring that it is cleared out afterwards, or of synchronizing your test results back into your favourite lifecycle management tool, be it ALM, Jira or even Salesforce.
For those of you unfamiliar with QA in general, or new to testing on Salesforce specifically, I hope you’ve found this blog useful for guiding the types of testing you should be considering, and when you might utilize an automated testing tool.
Richard is CTO at Provar Testing, the #1 Salesforce Test Automation solution provider. He’s been working in the Salesforce ecosystem since 2007, is a six-time Dreamforce speaker, plus a regular World Tour and Community events speaker. Prior to joining Provar Testing Richard was CTO at two UK System Integrator Platinum partners plus founder and CTO for two Salesforce ISV partners, delivering integrated solutions both on and off the AppExchange. He recently stepped down as co-organizer of the London UK Salesforce Developer User Group after 8 years but remains actively involved in the wider Salesforce community.