Lars Malmqvist

Lars is a 27x certified Salesforce CTA and has spent the past 12 years in the Salesforce ecosystem building advanced solutions on the platform. Currently, he works as an Associate Director in Accenture's Copenhagen office, supporting large Nordic Salesforce clients in their transformation journeys. For the past five years, he has focused on using AI on Salesforce, combining this with academic research in deep learning and argumentation. Recently, he published a book, Architecting AI Solutions on Salesforce, with Packt Publishing.

5 Key Architecture Concerns for AI Solutions

AI features are increasingly part of the remit of Salesforce Architects, but few understand how using these differs from classic architecture. This post explains five key concerns to bridge this gap.

Spring ’22 has recently hit orgs across the world. As always, it has included many great upgrades, but this time a momentous, but not widely understood, change has become apparent through a simple upgrade to the usually humble Surveys feature.

The momentous feature is called Get Qualitative Feedback with Sentiment Analysis on Text Response, and it allows you to natively get a sentiment analysis score for text responses to your survey questions. While you have been able to use Salesforce’s cutting-edge sentiment analysis functionality for a while via Einstein Platform Services, this is different because it requires nothing special on your part. The AI-driven capability is simply there, out of the box, for you to use like any other feature.

While it is not the first time Salesforce has embedded an AI feature in the core platform, this time it has been done without making a fuss about it. That is to say, a world-leading AI capability is being deployed to a standard feature as part of normal product development. 

What that means for architects working on Salesforce is simple. You can no longer avoid thinking about the implications of using AI, because AI is becoming part and parcel of the features you use every day. It is no longer a special project in the innovation portfolio, but something everybody needs to contend with.

However, most architects working on Salesforce today do not come from a data science or machine learning background. And the kind of technical architecture that we tend to do on enterprise systems does not transfer well to AI solutions in all cases. 

We need to shift our mindset slightly when using these features, or we risk getting our architecture and design wrong in subtle but important ways. First, we need to understand that AI features are not programs in the traditional sense, but rather models trained on data.

AI features are model-based

In traditional architecture and design on Salesforce or most other enterprise systems, we develop a feature to meet requirements using a combination of out-of-the-box features supplemented by declarative workflow logic in various forms and ultimately code, when we hit the limits of other features. 

This is a relatively clear-cut process: we know the aims, we know how to implement, we know how to test. At least in theory. AI features on the other hand are based on models. Overwhelmingly, these days, we generate such models based on statistical information extracted from datasets, but historically many have also been hand-crafted using a variety of approaches. 

In either case, we are trying to create or generate a model of a domain we’re interested in that can be used to infer something about problems relating to that domain. Usually, in an enterprise system context, that takes the form of making a prediction about the properties of new records based on historical data. 

For instance: given a field with some text and a set of historical data about the sentiment of different blocks of text, make a prediction about how positive or negative the text in that field is. That is effectively what the Surveys sentiment analysis feature does.
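To make this concrete, here is a deliberately minimal sketch of sentiment prediction from historical examples. It is not how Einstein's model works; it just counts how often each word appears in positive versus negative training texts, which is enough to show the "model derived from data" idea.

```python
# Toy sentiment model: learn word polarity from labeled historical texts,
# then score new text. Purely illustrative; real features use far more
# sophisticated models.
from collections import Counter

def train(examples):
    """examples: list of (text, label) where label is 'pos' or 'neg'."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Return a score in [-1, 1]; positive means positive sentiment."""
    pos = neg = 0
    for word in text.lower().split():
        pos += counts["pos"][word]
        neg += counts["neg"][word]
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

history = [
    ("great service very helpful", "pos"),
    ("quick and friendly support", "pos"),
    ("terrible experience slow response", "neg"),
    ("unhelpful and rude agent", "neg"),
]
model = train(history)
print(score(model, "helpful and friendly"))  # positive score
print(score(model, "slow and rude"))         # negative score
```

Note that the "model" here is just the word counts extracted from the data, not hand-written rules; change the training data and the predictions change with it.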

Whatever else models are, they are not as clear-cut as programmatic or declarative solutions. You don’t always know how well a model will work in various contexts, or even how to test it effectively. If you are using someone else’s model, such as those embedded in standard Salesforce features, you may not even know its full specifications.

Although Salesforce will provide you with a fair amount of documentation, it is still on you to make an informed assessment about the fit of that model to your requirements, how to validate it, and how to test it.

Models are probabilistic, not deterministic

The fact that the overwhelming majority of AI features are based on statistical models derived from data has important implications. In traditional programming, we rely on the basic principles of sequence, selection, and iteration to specify an algorithmic solution to a problem.

That solution may be complex, but it is deterministic. We always, in theory, know what should happen even if we may have trouble working it out in practice. That is simply not true for most machine learning models and the temptation to treat them as though they were deterministic is a major danger in architecting with AI features.

Let’s imagine that you package an AI feature to help determine which records to archive based on properties such as status, age, and activity, with the aim of cleaning up your operational data. You release it as a flow element with a configurable threshold that represents the degree of certainty you want the model to have before it archives a record.

You start conservatively with a high threshold, which works well, but gradually decrease it for a few messy objects. After a model upgrade, where you’ve incorporated some extra training data, you get an irate call from a sales manager in your EMEA office saying that all his opportunities have been archived.

You investigate, and sure enough they’re gone. Unfortunately, you have no idea what exactly shifted these opportunities across the line. You put in a call to your operations team for an emergency data patch and settle in for some very long hours of debugging. 

This, unfortunately, can happen very easily with machine learning models. Small changes to thresholds or training data can have a large impact on your functionality. You need to make sure that you have the safeguards, tests, and procedures in place to ensure that the end-to-end process works and will continue to work. Simply treating these models as you would any other building block is a surefire way to get yourself into trouble.
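One kind of safeguard the archiving scenario suggests is a guardrail around the model's output: besides the confidence threshold, cap how much of the dataset any single run may touch. The sketch below is a hypothetical illustration, not an actual Salesforce API; `predict` stands in for whatever probabilistic model you deploy.

```python
# Hypothetical guardrail around a probabilistic archiving model: cap the
# fraction of records a single run may archive, so a model or training-data
# change cannot silently wipe out a whole pipeline.
def select_for_archiving(records, predict, threshold, max_fraction=0.05):
    """records: list of record dicts; predict(record) -> probability in [0, 1]
    that the record should be archived. Returns the records to archive, or
    raises if the model suddenly wants to archive too many at once."""
    candidates = [r for r in records if predict(r) >= threshold]
    if len(candidates) > max_fraction * len(records):
        raise RuntimeError(
            f"Refusing to archive {len(candidates)} of {len(records)} records; "
            "model behaviour may have shifted - review before proceeding."
        )
    return candidates
```

With a guardrail like this, the EMEA incident above becomes a blocked run and an alert rather than an emergency data patch. The 5% cap is an illustrative default; the right value depends entirely on your data.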

Models are based on data

Your model is only as good as your data. That should be apparent from the discussion so far. The other implication is that you should use a model that will work with the data you have available.

When you use Salesforce Einstein features, they frequently specify a minimum amount of data for them to work at all and another number for them to work optimally. These limits are there to be respected, and in practice you want to err on the side of too much, rather than too little, data for the kind of model you are using.

Some simple models, regression models for instance, can work relatively well with a small number of data points. Others, such as most deep learning models, require massive amounts of data to be successful, although you may be able to fine-tune someone else’s model to your specific problem.

Equally, while more is generally more when it comes to data for machine learning, there can be specific cases where that is not true. For instance, if buying patterns have changed since last year, you may not want to include last year’s data in a model for making product recommendations today.

The takeaway is that you should pick a model that will work with the data you have available and be sure that the data you use is relevant to the problem. This applies both when fine-tuning models based on small amounts of your own data and when creating a full model from scratch.
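The minimum-versus-optimal data guidance that Einstein features publish can be turned into a simple pre-flight check before you commit to a model type. The numbers below are illustrative assumptions, not Salesforce's actual thresholds.

```python
# Hypothetical data-readiness check, mirroring the idea of per-model
# minimum and recommended record counts. Thresholds are illustrative only.
REQUIREMENTS = {
    "regression":    {"minimum": 100,    "recommended": 1_000},
    "deep_learning": {"minimum": 50_000, "recommended": 500_000},
}

def data_readiness(model_type, record_count):
    """Classify whether the available data supports the chosen model type."""
    req = REQUIREMENTS[model_type]
    if record_count < req["minimum"]:
        return "insufficient"
    if record_count < req["recommended"]:
        return "usable"
    return "optimal"

print(data_readiness("regression", 500))     # usable
print(data_readiness("deep_learning", 500))  # insufficient
```

The same 500 records that make a regression model usable are nowhere near enough for a deep learning model, which is exactly why model choice should follow from the data you actually have.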

Model performance changes over time

Unfortunately, training an appropriate model on appropriate data and deploying it in a well-thought-out way is not the end of your troubles. Model performance is highly likely to change over time as the underlying data generated by reality changes.

Things change so your model will also have to change. That means that as a minimum you need a strategy for monitoring model performance over time and be prepared to step in when necessary. 

However, in many cases, AI features contain ways to update themselves based on changing data, and for any you create yourself, you will likely want to consider a similar capability. Sometimes that can work really well.

Unfortunately, there are also cases where reality changes enough that the basic assumptions underlying the model fail and you have to fundamentally change your approach. You need to make sure you have the setup to enable you to know if this is happening.
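A minimal version of such a monitoring setup is sketched below: track a rolling window of recent prediction outcomes and flag when accuracy drifts too far below the level measured at deployment time. The class name, window size, and tolerance are all assumptions for illustration.

```python
# Sketch of a model-performance monitor: compare rolling accuracy on recent
# predictions against the baseline measured at deployment, and flag when it
# degrades beyond a tolerance. All thresholds are illustrative.
from collections import deque

class ModelMonitor:
    def __init__(self, baseline_accuracy, window=100, tolerance=0.10):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, was_correct):
        """Call once per prediction, after the true outcome is known."""
        self.outcomes.append(1 if was_correct else 0)

    def degraded(self):
        """True when the window is full and accuracy has dropped too far."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent data to judge yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance
```

When `degraded()` fires, that is your cue to step in: retrain, roll back, or, if reality has shifted enough, rethink the approach altogether.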

AI features have ethical implications

Finally, AI features bring with them a thorny nest of ethical issues that generally don’t apply to deterministic algorithms. 

The first big issue is opacity. While some simple statistical models like linear regression and decision trees can be easily explained, for the majority of models in common use, understanding how a given decision was reached can be difficult if not impossible. That is often not good enough when you are dealing with decisions that matter to people’s life and well-being.

The second big issue is bias. Data in the real world reflects the biases and discrimination endemic to that world. If we don’t take active steps to prevent it, the models we generate from that data will be just as biased as the datasets themselves.

The good news is that there are many ways to improve on this state of affairs both at the level of the data and at the level of algorithms. Salesforce as a company is quite committed to avoiding bias in its models, so this is more a problem when you are creating your own models, but you should still check any appropriate use guidance that comes with the out-of-the-box models.
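One basic check you can run on your own models is to compare the rate of positive outcomes across groups, a demographic-parity-style audit. The group labels and the 0.8 ratio rule of thumb below are assumptions for the sketch, not a complete fairness methodology.

```python
# Illustrative bias audit: compare the rate of positive model decisions
# across groups. A low ratio between the worst- and best-treated group is
# a signal to investigate, not a full fairness analysis.
def positive_rates(decisions):
    """decisions: list of (group, outcome) pairs with outcome True/False."""
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if outcome else 0)
    return {g: positives[g] / totals[g] for g in totals}

def parity_ok(decisions, min_ratio=0.8):
    """True when the lowest group rate is at least min_ratio of the highest."""
    rates = positive_rates(decisions)
    lowest, highest = min(rates.values()), max(rates.values())
    return highest == 0 or lowest / highest >= min_ratio

decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 4 + [("B", False)] * 6
print(positive_rates(decisions))  # {'A': 0.8, 'B': 0.4}
print(parity_ok(decisions))       # False: 0.4/0.8 = 0.5, below 0.8
```

A failed check does not by itself prove discrimination, but it tells you where to look, at both the data and the model, before the feature goes anywhere near a real decision.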

Other issues such as privacy, moral responsibility, and the degree of autonomy granted to artificial agents also present complications. These, however, fall outside the scope of what we can cover here. 

As an architect, you need to be mindful of these issues and actively counteract them whenever you are dealing with potentially ethically relevant decisions supported by AI features. These are increasingly coming to the Salesforce world as the platform expands into areas such as non-profit management, financial services, and the public sector.

Carrying the mindset into practice

The five key concerns that we have covered in this article are not the be-all and end-all of architecting AI solutions. However, if you take these lessons to heart, you will be aware of some of the most important issues relevant to architects working in this area.

As AI features become a more and more integrated part of creating solutions on the Salesforce platform, these issues will become more run-of-the-mill, and standard answers will emerge to handle them effectively. Until then, you need to remain aware of and vigilant about the additional risks presented by AI features. The rewards, however, make that effort worthwhile.
