Develop Scalable Solutions With Salesforce

Should we consider scalability while solutioning a Salesforce application? Should we even think about scalable solutions with Salesforce? These are important questions for Salesforce architects. This post answers them and shows how to build scalable solutions with Salesforce.

What Is Application Scalability?

Scalability is the ability of an application to handle a growing number of users, clients, and customers. A scalable Salesforce application can efficiently absorb rapid growth in its user base, and the customer experience stays the same no matter how many users are added to the Salesforce org.

Why Should We Care About Scalability?

If our application does not scale properly, platform performance degrades and, eventually, so does the trust of our clients and customers. Excessive load may even cause downtime, which can translate into a significant financial loss for clients and for us.

If we prioritize scalability as part of the system design, we can ensure a better user experience, lower costs, and higher agility in the long term.

Scalability Factors

In Salesforce, scalability is also affected by how much data an org holds and how that data is structured. Below are four factors that affect the scalability of a Salesforce application:

  • Data Stored in Org
  • Transaction Complexity
  • Record Sharing
  • Clean and Bulkified Code

Let us see how we can handle these factors effectively.

1. Data Stored in Org

While designing the Salesforce data model, we should consider many factors. Two of the most important for scalable solutions with Salesforce are:

1. Number of records in an object

The number of records in an object can influence response time, especially during queries and DML operations. Objects often accumulate old data that the org no longer needs. We should define an archive strategy for each object that backs up and retains old data, keeping only the data Salesforce actually needs for its operations. Archived data can be exported to CSV and stored safely outside Salesforce before the records are deleted from the Salesforce table, and the archive itself can be purged once its retention period expires.
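As a concrete sketch, the purge step of such a strategy can be a batch Apex job. Here, Case and the two-year retention window are illustrative assumptions, and the export to CSV is presumed to have happened already:

```apex
public with sharing class CasePurgeBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext ctx) {
        // Closed cases older than the (assumed) two-year retention window.
        DateTime cutoff = DateTime.now().addYears(-2);
        return Database.getQueryLocator([
            SELECT Id FROM Case
            WHERE IsClosed = true AND ClosedDate < :cutoff
        ]);
    }

    public void execute(Database.BatchableContext ctx, List<SObject> scope) {
        // Records were already exported to CSV outside Salesforce.
        delete scope;
    }

    public void finish(Database.BatchableContext ctx) {
        // Optionally notify admins or chain the purge for the next object.
    }
}
```

Kick it off with `Database.executeBatch(new CasePurgeBatch(), 200);`.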

2. Data Skew

Data skew occurs when more than 10,000 child records are associated with the same parent record. It slows queries and hurts the performance of list views, reports, and dashboards. To avoid it, we should create multiple parent records, for example one per category, and distribute child records across them, as in the sketch below.
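One simple way to distribute children, sketched here, is to pre-create a pool of "bucket" parent accounts and assign new records round-robin; the class name and the pool itself are hypothetical:

```apex
public with sharing class CaseBucketAssigner {
    // bucketAccountIds is assumed to hold the Ids of pre-created
    // "bucket" parent accounts.
    public static void assignParents(List<Case> newCases, List<Id> bucketAccountIds) {
        Integer i = 0;
        for (Case c : newCases) {
            if (c.AccountId == null) {
                // Round-robin keeps every parent well under the
                // ~10,000-child skew threshold.
                c.AccountId = bucketAccountIds[Math.mod(i, bucketAccountIds.size())];
                i++;
            }
        }
    }
}
```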

2. Transaction Complexity

When the Salesforce application is used by many concurrent users, it may run concurrent triggers, workflows, async batch jobs, and data sync jobs at the same time, which can lead to record-locking contention.

We should design transactions using a combination of low-code and pro-code approaches: low-code solutions such as Visual Flow to meet UX requirements, and pro-code solutions where the automation needs more control. Non-business-critical batch jobs and data sync operations should run during off-peak hours.
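For off-peak execution, a Schedulable wrapper can run a batch at, say, 2 AM. This sketch reuses the hypothetical CasePurgeBatch from earlier:

```apex
public with sharing class OffPeakPurgeScheduler implements Schedulable {
    public void execute(SchedulableContext ctx) {
        Database.executeBatch(new CasePurgeBatch(), 200);
    }
}
```

Schedule it once from Execute Anonymous, using Salesforce's Seconds Minutes Hours Day-of-month Month Day-of-week cron format:

```apex
System.schedule('Nightly case purge', '0 0 2 * * ?', new OffPeakPurgeScheduler());
```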

3. Record Sharing

Most organizations set the Organization-Wide Defaults (OWD) to Private and create sharing rules to share data. Sharing rules can impact application performance: changes such as record ownership transfers trigger sharing recalculations, and when those recalculations run during business hours they slow the application down.

4. Clean and Bulkified Code

Our code should be clean, properly commented, and always bulkified.

1. One trigger per object 

We should avoid an excessive number of triggers per object; too many triggers are difficult to maintain in a large enterprise application.

We should avoid triggers where possible, and when one is required, create a single trigger per object. Build a switch into the trigger so that it can be disabled on demand, for example during large data loads.

Implement a helper class for each trigger. Bulkify helper classes and methods, as well as triggers, to process up to 200 records per call.
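A minimal sketch of this pattern, assuming a hierarchy custom setting TriggerControl__c with a Disable_Account_Trigger__c checkbox as the kill switch:

```apex
trigger AccountTrigger on Account (before insert, before update) {
    // Kill switch: lets admins disable the trigger for large data loads.
    TriggerControl__c control = TriggerControl__c.getInstance();
    if (control != null && control.Disable_Account_Trigger__c) {
        return;
    }
    // Delegate everything to the handler; the trigger itself stays logic-less.
    AccountTriggerHandler.handle((List<Account>) Trigger.new, Trigger.operationType);
}
```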

2. Keep business logic out of triggers

We should not put business logic directly into our triggers. Doing so makes it harder to write individual unit tests for each piece of logic and leads to messy, difficult-to-maintain code. Logic-filled triggers also violate the single responsibility principle and encourage non-reusable code.

We should write logic-less triggers instead. There are many ways to implement them, but the most popular is the trigger handler pattern.
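The handler backing the trigger above might look like the following sketch; Region__c and its default value are made-up examples:

```apex
public with sharing class AccountTriggerHandler {
    public static void handle(List<Account> newRecords,
                              System.TriggerOperation operation) {
        switch on operation {
            when BEFORE_INSERT, BEFORE_UPDATE {
                setDefaultRegion(newRecords);
            }
        }
    }

    // Bulkified: one loop over up to 200 records, no SOQL or DML inside it.
    private static void setDefaultRegion(List<Account> accounts) {
        for (Account acc : accounts) {
            if (String.isBlank(acc.Region__c)) {
                acc.Region__c = 'EMEA';
            }
        }
    }
}
```

Because the logic lives in the handler, it can be unit-tested directly without going through trigger DML.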

3. Smaller Class

Break bigger classes into smaller reusable classes. We can follow the design patterns available in the Nlineaxis Enterprise Patterns open source project. Reusable classes reduce redundant code and overall lines of code, lowering the risk of our application hitting governor limits.
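For example, a small reusable selector class keeps SOQL in one place instead of duplicating it across handlers and batch jobs (ContactsSelector and its field list are illustrative):

```apex
public with sharing class ContactsSelector {
    // One reusable query instead of copies scattered across the codebase.
    public static List<Contact> byAccountIds(Set<Id> accountIds) {
        return [
            SELECT Id, Email, AccountId
            FROM Contact
            WHERE AccountId IN :accountIds
        ];
    }
}
```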

4. Use Custom Setting and Custom Metadata

We should use custom settings and custom metadata types wherever possible. Both are cached, so reading them does not consume SOQL query limits. Check out the blog Custom Setting and Custom Metadata Type for more detail about their benefits and usage.
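A quick sketch of reading both from the cache; Feature_Flag__c, Integration_Endpoint__mdt, and their fields are hypothetical names:

```apex
// Hierarchy custom setting: getInstance() is served from cache,
// so it costs no SOQL query against the limit.
Feature_Flag__c flags = Feature_Flag__c.getInstance();
Boolean syncEnabled = flags != null && flags.Enable_Sync__c == true;

// Custom metadata type: getInstance() is likewise cached.
Integration_Endpoint__mdt endpoint = Integration_Endpoint__mdt.getInstance('Billing');
String baseUrl = endpoint != null ? endpoint.Base_URL__c : null;
```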

There are lots of other best practices for making our application faster and more scalable. Check out our other blog Optimizing Salesforce Nlineaxis Code for more detail.

Author

Divya Srivastava
