Best Practices

Naming and architectural conventions that are considered best practice

Field Conventions

General conventions about field creation, grouping, naming, etc.

Field Conventions

General Conventions

All naming conventions are RFC 2119 and RFC 6919 compliant.

  1. All field API names MUST be written in English, even when the label is in another language.
  2. All field API names MUST be written in PascalCase.
  3. Fields SHOULD NOT contain an underscore in the field name, except where explicitly defined otherwise in these conventions.
  4. Fields generally MUST (but you probably won't) contain a description.
  5. In all cases where the entire purpose of the field is not evident by reading the name, the field MUST contain a description.
  6. If the purpose of the field is ambiguous, the field MUST contain a help text. In cases where the purpose is clear, the help text COULD also be defined for clarity's sake.
  7. Field API names SHOULD respect the following prefixes and suffixes.2 Prefixes and suffixes SHALL NOT be prepended by an underscore.
Field Type Prefix Suffix
MasterDetail - Ref
Lookup - Ref
Formula - Auto
Rollup Summary - Auto
Filled by automation (APEX)1 - Trig
Picklist or Multipicklist - Pick
Boolean Is or IsCan3 -

1 Workflows, Process Builders and Flows are not included in this logic because these automations either allow field name modifications with no error, or can be modified by an administrator. If fields are created for the sole purpose of being filled by automation (e.g. fields that will be used in roll-up summaries), a consultant WOULD PROBABLY use the Trig suffix anyway, to indicate that users cannot set the data themselves.

2 While norms for other field types were considered, e.g. to make sure number, currency and percentage fields were easily recognizable, they were discarded as being too restrictive for an admin. Fixing type mismatches in this case is easily solved by casting the value to the correct type using either TEXT() or VALUE() functions.

3 IsCan replaces "Can", e.g. CanActivateContract becomes IsCanActivateContract. This is to enable searching for all checkboxes on a page with a single query.

Field Conventions

Grouping fields

  1. If the organization is home to multiple services, the field API name SHOULD be prepended with the name of the service that required the field, followed by an underscore.
    • This MUST NOT be the case if there is only one service using the object.
  2. If several services use the field, or the field was originally required by a service before being used by others: the field API name MUST (but you probably won't) be prepended with the name of the service that is responsible for the user story that led to the field creation, followed by an underscore. The Description of the field MUST indicate which services use the field.1
  3. If the field is used differently by different services, the Description of the field MUST contain a summary description of each use.
  4. If a field is created to host a value for technical reasons, but is not or should not be displayed to the users, the API name MUST be prefixed with TECH and an underscore.
  5. If more than 50 fields are created on an object, a consultant SHOULD consider using prefixes to group fields in the same manner as technical fields, in the format of $Groupname followed by an underscore.

Examples

Object Field type Comment Field Label Field API Name Field Description
Case Lookup Looks up to Account Service Provider ServiceProviderRef__c Links the case to the Service Provider who will conduct the task at the client's.
Account Formula Made for the Accounting department only Solvability Accounting_SolvabilityAuto__c Calculates solvability based on revenue and expenses. Sensitive data, should not be shared.
Contact Checkbox   Sponsored ? IsSponsored__c Checked if the contact was sponsored into the program by another client.

 

1 While modifying API names post-deployment is notoriously complicated, making sure that fields are properly recognizable is better in the long term than avoiding maintenance during a project. Such modifications SHOULD be taken into account when doing estimations.

Workflow Conventions

Naming and structural conventions related to workflows

Workflow Conventions

Workflow Triggers

Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder.

These naming convention best practices will remain in place for reference purposes so long as Workflow Rules may exist in a Salesforce org.

A workflow trigger MUST always be named after what triggers the workflow, and not the actions it performs. Wherever possible, the number of triggers per object SHOULD be limited - reading the existing trigger names allows reusing existing ones when possible. Knowing that all automations count towards Salesforce's allotted CPU time per transaction, a consultant SHOULD consider how to limit the number of workflows in all cases.

  1. All Workflow Triggers MUST contain a Bypass Rule check.
  2. A Workflow Trigger SHALL always start by WF, followed by a number corresponding to the number of workflows on the triggering Object, followed by an underscore.
  3. The Workflow Trigger name MUST try to explain in a concise manner what triggers the WF. Note that conciseness trumps clarity for this field.
  4. All Workflow Triggers MUST have a description detailing how they are triggered.
  5. Wherever possible, a Consultant SHOULD use operators over functions.

Examples

Object WF Name Description WF Rule
Invoice WF01_WhenInvoicePaid This WF triggers when the invoice Status is set to "Paid". Triggered from another automation. !$User.BypassWF__c && ISPICKVAL(Status__c, "Paid")
Invoice WF02_CE_WhenStatusChanges This WF triggers every time the Status of the invoice is changed. !$User.BypassWF__c && ISCHANGED(Status__c)
Contact WF01_C_IfStreetBlank This WF triggers on creation if the street is Blank !$User.BypassWF__c && ISBLANK(MailingStreet)
Workflow Conventions

Workflow Field Updates

Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder.

  1. A Workflow Field Update MUST Start with FU, followed by a number corresponding to the number of field updates on the triggering Object.
  2. A Workflow Field Update SHOULD contain the Object name, or an abbreviation thereof, in the Field Update Name.1
  3. A Workflow Field Update MUST be named after the field that it updates, and then the values it sets, in the most concise manner possible.
  4. The Description of a Workflow Field Update SHOULD give precise information on what the field is set to.

Examples

Object FU Name Description
Contact FU01_SetEmailOptOut Sets the Email Opt Out checkbox to TRUE.
Invoice FU02_SetFinalBillingStreet Calculates the billing street based on if the client is billed alone, via an Agency, or via a mother company. Part of three updates that handle this address.
Contact FU03_CalculateFinalAmount Uses current Tax settings and information to set the final amount

 

1 While Field Updates are segregated by Object when viewed through an IDE or through code, the UI offers no such ease of use. If this is not done, a consultant WOULD PROBABLY create list views for field updates per Object.

Workflow Conventions

Workflow Email Alerts

Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder.

Email Alerts are NOT part of the Workflow Rule deprecation plan - you can and should continue to configure and use Email Alerts. Flows can reference and execute these Email Alerts.

  1. A Workflow Email Alert MUST Start with EA, followed by a number corresponding to the number of email alerts on the triggering Object.

  2. A Workflow Email Alert SHOULD contain the Object name, or an abbreviation thereof, in its Unique Name.

  3. A Workflow Email Alert's Unique Name and Description SHOULD contain the exact same information, except where a longer description is absolutely necessary.1

  4. A Workflow Email Alert SHOULD be named after the type of email it sends, or the reason the email is sent.

    Note that declaratively, the Name of the template used to send the email is always shown by default in Email Alert lists.

Examples

Object EA Name Description
Invoice EA01_Inv_SendFirstPaymentReminder EA01_Inv_SendFirstPaymentReminder.
Invoice EA02_Inv_SendSecondPaymentReminder SendSecondPaymentReminder
Contact EA03_Con_SendBirthdayEmail EA03_Con_SendBirthdayEmail

1 Email Alert Unique Names are generated from the Description by default in Salesforce. As Email Alerts can only send emails, this convention describes a less exhaustive solution than it could be, in favor of speed when creating Email Alerts declaratively.

Workflow Conventions

Workflow Tasks

Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder.

  1. A Workflow Task Unique Name MUST Start with TSK, followed by a number corresponding to the number of tasks on the triggering Object.

  2. A Workflow Task Unique Name COULD contain the Object name, or an abbreviation thereof. This is to avoid different conventions for Workflow Actions in general.

    Most information about tasks is displayed by default declaratively, and creating a task should rarely impact internal or external processes in such a manner that urgent debugging is required. As Users will in all cases never see the Unique Name of a Workflow Task, it is neither needed nor recommended to standardize them more than necessary.

Workflow Conventions

Workflow Outbound Messages

Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder.

Outbound Messages are NOT part of the Workflow Rule deprecation plan - you can and should continue to configure and use Outbound Messages when appropriate. Flows can reference and execute these Outbound Messages.

  1. An Outbound Message Name MUST Start with OM, followed by a number corresponding to the number of outbound messages on the triggering Object.
  2. An Outbound Message Name COULD contain the Object name, or an abbreviation thereof. This is to avoid different conventions for Workflow Actions in general.

  3. An Outbound Message MUST be named after the Service that it sends information to, and then the information it sends, in the most concise manner possible.

  4. The Description of An Outbound Message SHOULD give precise information on why the Outbound Message is created.

  5. Listing the fields sent by the Outbound Message is NOT RECOMMENDED.

Examples

Object OM Name Description
Invoice OM01_Inv_SendBasicInfo Sends the invoice header to the client software.
Invoice OM02_Inv_SendStatusPaid Sends a flag that the invoice was paid to the client software.
Contact OM01_SendContactInfo Sends most contact information to the internal Directory.

Validation Rule Conventions

Conventions about validation rules, naming, creation, etc

Validation Rule Conventions

Validation Rule Metadata Conventions

  1. The Validation Rule Name MUST try to explain in a concise manner what the validation rule prevents. Note that conciseness trumps clarity for this field.
  2. All validation Rules API names MUST be written in PascalCase.
  3. Validation Rules SHOULD NOT contain an underscore in their API name, except where explicitly defined otherwise in these conventions.
  4. A Validation Rule SHALL always start with a shorthand of the object name (example: ACC), then the string VR, followed by a number corresponding to the number of validation rules on the triggering Object, followed by an underscore.
  5. The Validation Rule Error Message MUST contain an error code indicating the number of the Validation Rule, in the format [VRXX], XX being the Validation Rule Number.1
  6. Validation Rules MUST have a description, where the description details the Business Use Case that is addressed by the VR. A Description SHALL NOT contain technical descriptions of what triggers the VR - the Validation Rule itself SHOULD be written in such a manner as to be clearly readable.

1 While including an error code in a user-displayed message may be seen as strange, this will allow any admin or consultant to find exactly which validation rule is causing problems, as users need only communicate the error code for debugging purposes.

Validation Rule Conventions

Validation rules writing conventions

  1. All Validation Rules MUST contain a Bypass2 Rule check.
  2. Wherever possible, a Consultant SHOULD use operators over functions.

  3. All possible instances of IF() SHOULD be replaced by CASE().

  4. Referencing other formula fields SHOULD be avoided at all costs.

  5. In all instances, ISBLANK() SHOULD be used instead of ISNULL(), as recommended by the Salesforce documentation.

  6. Validation Rules MUST NOT be triggered in a cascading manner.1

Examples

Name Formula Error Message Description
OPP_VR01_CancelReason !$Setup.Bypasses__c.IsBypassVR__c && TEXT(Cancellationreason__c)="Other" && ISBLANK(OtherCancellationReason__c) If you select "other" as a cancellation reason, you must fill out the details of that reason. [OPP_VR01] Prevents selecting "other" as a cancellation reason without putting a comment in. [OPP_VR01]
OPP_VR02_NoApprovalCantReserve !$Setup.Bypasses__c.IsBypassVR__c && !IsApproved__c && ( ISPICKVAL(Status__c,"Approved - CC ") || ISPICKVAL(Status__c,"Approved - Client") || ISPICKVAL(Status__c,"Paid") ) The status cannot advance further if it is not approved. [OPP_VR02] The status cannot advance further if it is not approved. [OPP_VR02]

1 Cascading Validation Rules are defined as VRs that trigger when another VR is triggered. Example: A field is mandatory if the status is Lost, but the field cannot contain less than 5 characters. Doing two validation rules which would trigger one another would result in a user first seeing that the field is mandatory, then saving again, and being presented with the second error. In this case, the second error should be displayed as soon as the first criteria is met.

2 See main Bypasses page for more info on the topic

Bypasses

We reference "bypasses" in a number of these conventions.

Bypasses are administrator-defined checkboxes that allow you to massively deactivate automations and validation rules with a simple click. They avoid that awkward feeling when you realize that you need to turn them off one by one so you can do that huge data update operation you're planning.

If your validation rule bypasses look like this, that is bad:
$Profile.id <> '00eo0000000KXdC' && somefield__c = 'greatvalue'

The most maintainable way to create a bypass is to create a hierarchical custom setting to store all the bypass values that you will use. This means that the custom setting should be named "Bypasses", and contain one checkbox field per type of bypass that you want to support.

Great-looking bypass setup right there

This setup allows you to create bypasses either by profile or by user.

This allows you to reference bypasses directly in your validation rules, letting Salesforce determine whether or not the bypass is active for that specific user or profile. In the validations themselves, using this bypass is as easy as referencing it via the formula builder. An example for validation rules could be:

!$Setup.Bypasses__c.IsBypassVR__c && Name = "test"

You can use a single custom setting to host all the bypasses for your organization, including Validation Rules, Workflows, Triggers, etc. As additional examples, "Bypass Emails" and "Bypass Invoicing Process" checkboxes can also be added to the custom setting - adding the check for these checkboxes in the automations that trigger emails, and in the automations that belong to Invoicing respectively, allows partial deactivation of automations based on criteria.
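The same custom setting can also be checked in Apex. Below is a minimal, illustrative sketch, assuming the hierarchical custom setting is named Bypasses__c (as above) and carries a hypothetical IsBypassTriggers__c checkbox, running on an arbitrary Invoice__c object; getInstance() resolves the value for the running user, then their profile, then the organization default.

trigger InvoiceTrigger on Invoice__c (before insert, before update) {
    // Hierarchical custom setting: user value, else profile value, else org default.
    Bypasses__c bypasses = Bypasses__c.getInstance();

    // IsBypassTriggers__c is a hypothetical checkbox, mirroring IsBypassVR__c above.
    if (bypasses != null && bypasses.IsBypassTriggers__c == true) {
        return; // Bypass active for this user or profile: skip all trigger logic.
    }

    // ... normal trigger logic goes here ...
}

Tick the checkbox for your data-loading user (or its profile) before a mass operation and untick it afterwards, exactly like the validation rule bypass.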

Data Migration Best Practices

An attempt to help you not delete your production database

Data Migration Best Practices

1 - Data Migrations Checklist

The following is a semi-profanity-ridden attempt at explaining one way to do data migrations while following best practices. It is rather long and laced with colorful language. If you have read it already, or if you want to avoid the profanity, you can consult the following checklist in the beautiful table below.

Note that all elements are considered mandatory.

As a quick note, and a reminder even if you've read the whole version:

DO NOT MODIFY SOURCE DATA FILES, EVER.

If you're doing data migrations, either use a script to modify the source files and save the edited version, or use excel workbooks that open the source file and then save the edited result elsewhere. Yes, even if the source is an excel file.

Why? Because sources change. People forget stuff, files aren't well formatted, shit gets broken, and people are human - meaning that one-time data import is actually going to be done multiple times. Edit the source file, and you get to do everything all over again. Use scripts or workbooks to do the transformations? Point those to the new source file and BAM, Bob's your uncle.

Scripts you might want to use:

Or, if you prefer Excel, open a blank workbook, import the source file via the "Data" ribbon tab, select "From Text/CSV" (or whatever matches based on your source type), then save it as both:

  • the construction workbook, which keeps the connection to the source file and the transformations;
  • the transformed output file that you will actually load.

That way when you change the source file you can just open the construction book again and resave.

Action Completed?
DO YOU HAVE A BACKUP  
Is it UTF-8 encoded  
Did you check it is readable and well formatted  
Does it have carriage returns stored as carriage returns, not as spaces  
Is it up to date  
Do you have a mapping for every object and field  
Did you determine an ExternalID for each object  
Did you determine source of truth (whether to overwrite or not) for each field  
Did the client sign off on the mapping  
Do you have the source data  
Is it in a format your tool can read  
Are dates and date-times well formatted (yyyy-mm-dd || yyyy-mm-ddT00:00:00z) and are times exported in UTC  
Are field lengths respected (emails not longer than 80 chars, Names not longer than 40, etc)  
Do numbers have the right separators  
Do all tables have the required data for loading (Account Name, Contact Last Name, etc etc etc)  
Do all fields that have special characters or carriage returns have leading and trailing qualifiers (")  
Do all records have an external Id  
Did you do a dummy load with only one field mapped to make sure your tool can read the entire file  
Are you doing transformations  
Did you document them all  
Did you automate them so you can run them again with a click  
Did you read the LDV guide if you are loading more than 1M records  
Did you activate validation rules bypass  
Did you check all automations to deactivate any that should be, including email alerts  
Did you warn the client about when you would do the data load  
Did you warn the client about how long the data load would take  
--------- run the migration -----------  
Did you reactivate all automations  
Did you remove validation rule bypass  
Did you tell the client you were done and they could check  
Did you check the quality of the data  
Data Migration Best Practices

2 - Data Migration Step-by-step - Before Loading

Introduction


You're going to have to map data from various sources into Salesforce. IT'S THAT BIG MIGRATION TIME.

Well let's make sure you don't have to do it again in two days because data is missing, or delete production data.

Salesforce does not back up your data.

If you delete your data, and the amount deleted is bigger than what the recycle bin holds, it will be deleted forever. You could try restoring it via Workbench, praying that the automated Salesforce jobs haven't wiped your data yet.
If you update data, the moment the update hits the database (the DML is done), the old data is lost. Forever.

If you don't have a backup, you could try seeing if you turned on field history.

If worst comes to worst you can pay 10 000€ (not joking, see here) to Salesforce to restore your data. Did I mention that Salesforce would give you a CSV extract of the data you had in Salesforce? Yeah, they don't restore the org for you. You'd still need to restore it table per table with a data loading tool.

But let's try to avoid these situations, by following these steps. These steps apply to any massive data load, but especially in case of deletions.

GENERAL DATA OPERATIONS STUFF

Tools

Do not use Data Loader if you can avoid it. If you try doing a full data migration with Dataloader, you will not be helped. By this I mean I will laugh at you and go back to drinking coffee. Dataloader is a BAD tool.


Amaxa is awesome and handles objects that are related to one another. It's free and awesome.
Jitterbit is like Dataloader but better. It's free. It's getting old though, and some of the newer features, like Time fields, won't work.
Talend requires some tinkering but knowing it will allow you to migrate from almost anything, to almost anything.
Hell you can even use SFDX to do data migrations.

But yeah don't use dataloader. Even Dataloader.io is better, and that's a paid solution. Yes I would recommend you literally pay rather than use Dataloader.

If you MUST use dataloader, EXPORT THE MAPPINGS YOU ARE DOING. You can find how to do so in the data loader user guide: https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/data_loader.htm

Even if you think you will do a data load only once, the reality is you will do it multiple times. Plus, for documentation, having the mapping file is best practice anyway. Always export the mapping, or make sure it is reusable without rebuilding it, whatever the tool you use.

 
Volume

If you are loading a big amount of data or the org is mature, read this document entirely before doing anything. LDV starts at a few million records in general, or several gigabytes of data. Even if you don't need this right now, reading it should be best practice in general.

Yes, read the whole thing. The success of the project depends on it, and the document is quite short.

 

Deletions

If you delete data in prod without a backup, this is bad.
If the data backup was not checked, this is bad.
If you did not check automations before deleting, this is also bad.

Seriously, before deleting ANYTHING, EVER:

 

Data Mapping

For Admins or Consultants: you should avoid mapping the data yourself. Any data mapping you do should be done with someone from the end-user's side who understands what you are saying. If no one like this is available, spend time with a business operative so you can do the mapping together and make them sign off on it.

The client signing off on the mapping is drastically important, as this will impact the success of the data load, AND what happens if you do not successfully load it - or if the client realizes they forgot something.

Basic operations for a data mapping are as follows:

 

Data retrieval

Data needs to be extracted from the source system. This can be via API, an ETL, a simple CSV extract, etc. Note that in general it is better if storing data as CSV can be avoided - ideally you should do a point-to-point load which simply transforms the data - but as most clients can only extract CSV, the following best practices apply:

 

Data Matching

You should already have created External Ids on every table, if you are upserting data.
If not, do so now.
DO NOT match the data in excel.

Yes, INDEX(MATCH()) is a beautiful tool. No, no one wants you to spend hours doing that when you could be doing other stuff, like drinking a cold beer.

If you're using VLOOKUP() in Excel, stop. Read up on how to use INDEX(MATCH()). You will save time, the results will be better, and you will thank yourself later. Only thing to remember is to always add "0" as a third parameter to "MATCH" so it forces exact results.

Store IDs of the external system in the target tables, in the ExternalId field. Then use that when recreating lookup relationships to find the records.

This saves time, avoids you doing a wrong matching, and best of all, if the source data changes, you can just run the data load operation again on the new file, without spending hours matching IDs.
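To make the external ID approach concrete, here is a minimal, illustrative Anonymous Apex sketch. The Legacy_Id__c external ID fields (on Account and Contact) are hypothetical placeholders, and the referenced Account is assumed to exist already (or to be loaded first); most data loading tools expose the same "upsert on external ID" and "relate via external ID" options declaratively.

// Hypothetical external ID fields: Account.Legacy_Id__c and Contact.Legacy_Id__c.
Contact migrated = new Contact(
    LastName = 'Durand',
    Legacy_Id__c = 'CRM-00042' // ID coming from the source system
);

// Recreate the lookup by referencing the parent's external ID instead of its Salesforce Id.
migrated.Account = new Account(Legacy_Id__c = 'ERP-1337');

// Upserting on the external ID makes the load re-runnable: rows are matched, not duplicated.
upsert migrated Contact.Fields.Legacy_Id__c;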

Data Migration Best Practices

3 - Data Migration Step-by-step - Loading

FIRST STEPS

  1. Log in to Prod. Is there a weekly backup running, encoded as UTF-8, in Setup > Data Export?
    • Nope
      Select encoding UTF-8 and click "Export Now". This will take hours.
      Turn that weekly stuff on.
      Make sure the client KNOWS it's on.
      Make sure they have a strategy for downloading the ZIP file that is generated by the extract weekly.
    • Yup
      • Is it UTF-8, and has it run in the last 48 hours?
        • Yup
          Confer with the client to see if additional backup files are needed. Otherwise, you're good.
        • Nope
          If the export isn't UTF-8, it's worthless.
          If it's more than 48h old, confer with the client to see if additional backup files are needed. In all cases, you should consider doing a new, manual export.

          SERIOUSLY MAKE SURE YOU CHANGE THE ENCODING. Salesforce has some dumb rule of not defaulting to UTF-8. YOU NEED UTF-8. Accents and ḍîáꞓȑîȶîꞓs exist. Turns out people like accents and non-roman alphabets, who knew?

      • If Data Export is not an option because it has run too recently, or because the encoding was wrong, you can also do your export by using whatever tool you want to query all the relevant tables. Remember to set UTF-8 as the encoding on both export and import.
  2. Check the org code and automation
    • Seriously, look over all triggers that can fire when you upload the data.
      You don't want to be that consultant that sent a notification email to 50000 people.
      Just check the triggers, WFs, PBs, and see what they do.
      If you can't read triggers, ask a dev to help you.
      Yes, Check the Workflows and Process Builders too. They can send Emails as well.
    • Check Process Builders again. Are there a lot that are firing on an object you are loading? Make a note of that for later, you may have to deactivate them.
  3. Check data volume.
    • Is there enough space in the org to accommodate the extra data? (This should have been part of the pre-project checks, but check it again.)
    • Are the volumes to load large enough to cause problems API-call-wise?
      If so, you may need to consider using Bulk API jobs instead of normal operations
    • In case data volumes are REALLY big, you will need to abide by LDV (large data volume) best practices, including not doing upserts, deferring sharing calculations, and grouping records by Parent record and owner before uploading. The full list of these is available in the PDF linked above.

 

PREPARING THE JOBS


Before creating a job, ask yourself which job type is best.

Upsert is great but is very resource intensive, and is more prone to RECORD_LOCK than other operation types. It also takes longer to complete.
Maybe think about using the Bulk API.
In all cases, study what operation you do and make sure it is the right one.
Once that is done...

You are able to create insert, upsert, query and deletion jobs, and change select parts of it. That's because you are using a real data loading tool.

This is important because this means you can:

If something fails, you correct the TRANSFORMATION, not the file, except in cases where it would be prohibitively long to do so. Meaning if you have to redo the load, you can run the same scripts you did before to have a nice CSV to upload.

 

GETTING READY TO DO THAT DATA OPERATION

This may sound stupid but warn your client, the PM, the end users that you're doing a data load. There's nothing worse than losing data or seeing stuff change without knowing why. Make sure key stakeholders are aware of the operation, the start time, and the estimated end time. Plus, you need them to check the data afterwards to ensure it's fine.


You've got backups of every single table in the Production org.
Even if you KNOW you do, you open the backups and check they are not corrupt or unreadable. Untested backups are no backups.
You know what all automations are going to do if you leave them on.
You talked with the client about possible impacts, and the client is ready to check the data after you finish your operations.
You set up, with the client, a timeframe in which to do the data operation.
If the data operation impacts tables that users work on normally, you freeze all those users during that timeframe.

Remember to deactivate any PB, WF, APEX that can impact the migration. You didn't study them just to forget them.

If this is an LDV job, take into account any considerations listed above.

 

DATA OPERATION

  1. Go to your tool and edit the Sandbox jobs.
  2. Edit the job Login to point to production
  3. Save all the jobs.
  4. You run, in order, the jobs you prepared.

When the number of failures is low enough, study the failure files, take any corrective action necessary, then use those files as a new source for a new data load operation.

Continue this loop until the number of rejects is tolerable.

This will ensure that if for some reason you need to redo the entire operation, you can take the same steps in a much easier fashion.

Once you are done, take the failure files, study them, and prepare a recap email detailing failures and why they failed. It's their data, they have a right to know.

 

POST-MIGRATION

Go drink champagne.

 

IF SHIT DOESN'T LOOK RIGHT

You have a backup. Don't panic.

Getting the right (number of) Admins


Salesforce Success Services
Achieve Outstanding CRM Administration

Because Salesforce takes care of many traditional administration tasks, system administration is easier than ever before. Setting up, customizing the application, training users, and “turning on” the new features that become available with each release—all are just a few clicks away. The person responsible for these tasks is your Salesforce CRM administrator. Because this person is one of the most important resources in making your implementation a success, it’s important to carefully choose your administrator and to continually invest in his or her professional development. You can also choose to have Salesforce handle administrator tasks for you.

Note: Larger enterprise implementations often use a role called Business Analyst or Business Application Manager as well, particularly for planning the implementation and ensuring adoption once the solution is live. Although the most common customization tasks don’t require coding, you may want to consider using a professional developer for some custom development tasks, such as writing Force.com code (Apex), developing custom user interfaces with Force.com pages (Visualforce), or completing complex integration or data migration tasks.
In many ways, the administrator fills the role played by traditional IT departments: answering user questions, working with key stakeholders to determine requirements, customizing the application to appeal to users, setting up reporting and dashboards to keep managers happy, keeping an eye on availability and performance, activating the features in new releases, and much more. This paper will help you to make important choices when it comes to administering your Salesforce CRM application, including:
Finding the right person(s)
Investing in your administrator(s)
Providing adequate staffing
Getting help from Salesforce
Find the right administrator
Who would make an ideal Salesforce CRM administrator? Experience shows that successful administrators can come from a variety of backgrounds, including sales, sales operations, marketing, support, channel management, and IT. A technical background may be helpful, but is not necessary. What matters most is that your administrator is thoroughly familiar with the customization capabilities of the application and responsive to your users. Here are some qualities to look for in an administrator:
A solid understanding of your business processes
Knowledge of the organizational structure and culture to help build relationships with key groups
Excellent communication, motivational, and presentation skills
The desire to be the voice of the user in communicating with management
Analytical skills to respond to requested changes and identify customizations
Invest in your administrator
Investing in your administrator will do wonders for your Salesforce CRM solution. With an administrator who is thoroughly familiar with Salesforce CRM, you’ll ensure that your data is safe, your users are productive, and you get the most from your solution.
Salesforce offers both self-paced training and classroom training for administrators. For a list of free, self-paced courses, go to Salesforce Training & Certification. To ensure that your administrator is fully trained on all aspects of security, user management, data management, and the latest Salesforce CRM features, enrol your administrator in Administration Essentials (ADM201). The price of this course includes the cost of the certification that qualifies your administrators to become Salesforce.com Certified Administrators. For experienced administrators, Salesforce offers the Administration Essentials for Experienced Admins (ADM211) course.

Providing adequate staffing
The number of administrators (and, optionally, business analysts) required depends on the size of your business, the complexity of your implementation, the volume of user requests, and so on. One common approach for estimating the number of administrators you need is based on the number of users.

Number of users Administration resources
1 – 30 users < 1 full-time administrator
31 – 74 users 1+ full-time administrator
75 – 149 users 1 senior administrator; 1 junior administrator
150 – 499 users 1 business analyst, 2–4 administrators
500 – 750 users 1–2 business analysts, 2–4 administrators
> 750 users Depends on a variety of factors




In addition to the user base, also consider the points below:
In small businesses, the role of the administrator is not necessarily a full-time position. In the initial stages of the implementation, the role requires more concentrated time (about 50 percent). After go-live, managing Salesforce CRM day to day requires much less time (about 10–25 percent)
If you have several business units that use Salesforce CRM solutions—such as sales, marketing, support, professional services, and so on—consider using separate administrators for each group, to spend between 50–100 percent of their time supporting their solutions.
Another common practice for large implementations is to use “delegated administrators” for specific tasks such as managing users, managing custom objects, or building reports.
If you operate in multiple geographic regions, consider using one administrator for each major region, such as North America, EMEA, and APAC. To decide how to classify regions, consider whether they have a distinct currency, language, business processes, and so on, and train your administrators in the multicurrency and multilanguage features. Also appoint a lead analyst or administrator who will coordinate the various regions.
If you need customization beyond the metadata (click not code) capabilities of Salesforce CRM or want to develop new applications, you may also need a developer to create, test, and implement custom code.

https://help.salesforce.com/HTViewSolution?id=000007548

ARCHIVED - Process Builder Conventions

Process Builder is old, decrepit, and deprecated. You can't create new ones, and if you're editing old ones you should be migrating to Flows instead. This is ARCHIVED content, will never be updated, and is here for history reasons.

ARCHIVED - Process Builder Conventions

ARCHIVED - Process Builder Bypass

Process Builder is old, decrepit, and deprecated.
You can't create new ones, and if you're editing old ones you should be migrating to Flows instead.
This is ARCHIVED content, will never be updated, and is here for history reasons.

Normally we put bypasses in everything (workflows, validation rules, etc). Process builders especially are interesting to bypass because they're still SLOW AS HELL and they can be prone to unforeseen errors - specifically during data loads.


Plus if you have process builders sending emails you probably want to skip them when you're loading data massively.
A few years ago I didn't find a solution that suited me. A year or so ago they activated system labels for PB, so you can search for the custom setting like in WF - but you couldn't carry it over to the next element, so you had to add the bypass, in formula mode, to every element. Taxing and costly in hours, plus you had to use formulas for everything.


Here you set it once, in every TPB, and then you have a working bypass for every process builder ever. Low cost, easy to maintain, and allows deactivation on mass loads or other operations where you don't want those things firing.


Ok so there's the usual, recommended Bypass Custom setting that I write about in my best practices. I added a PB bypass field there.

I created a notification type, which then allows you to use "send a notification" as the sole action when the bypass is active.

I would rather it were a "no action" option, but that doesn't exist, so in the meantime a notification does the job.

ARCHIVED - Process Builder Conventions

ARCHIVED - Process Builder Structural Conventions

Process Builder is old, decrepit, and deprecated.
You can't create new ones, and if you're editing old ones you should be migrating to Flows instead.
This is ARCHIVED content, will never be updated, and is here for history reasons.

General Conventions

1. If there are APEX triggers firing on an object, Process Builder SHOULD NOT be used. *1
2. If Process Builders existed before building the APEX triggers, the Process Builders SHOULD be replaced by APEX triggers and classes.
3. Process Builders REALLY SHOULD NOT fire on, update, or otherwise reference, Person Accounts.
4. Process Builders REALLY SHOULD NOT perform complex operations on records that can be massively inserted/updated as a routine part of organization usage.
5. Process Builders MUST NOT call a Flow if firing on an object that can be massively inserted/updated as a routine part of organization usage.
6. Process Builder execution SHOULD be limited to the exact cases where they are needed. In all cases, a consultant SHOULD limit the number of process builders executing on an object.

Structural Conventions

1. Generally, a consultant SHOULD build Invocable Process Builders, and Invoke them from one single Process on the triggering Object.
❍ This is as opposed to creating one process builder per task.
❍ Invocable process builders cannot be used to trigger time-dependent actions, meaning you will probably end up with at least one separate Process Builder for time-dependent actions.
2. Process Builders generally SHOULD NOT use the "no criteria" option of the Decision Diamonds. There is always at least one sanity check to do.
3. Whenever possible, multiple Process Builders on an object should be migrated to a single Process Builder, with different actions evaluated one after the other. This is now officially mandated by Salesforce.

*1 This is a best practice, but it should be noted that for smaller organizations, triggers and process builders may coexist on the same objects.

 

ARCHIVED - Process Builder Conventions

ARCHIVED - Process Builder Naming Conventions

Process Builder is old, decrepit, and deprecated.
You can't create new ones, and if you're editing old ones you should be migrating to Flows instead.
This is ARCHIVED content, will never be updated, and is here for history reasons.

  1. A Process Builder name SHALL always start by PB, followed by a number corresponding to the number of process builders in the Organization, followed by an underscore.
    a. If the Process Builder Triggers other Process Builders, it SHALL always start by TPB instead.
    b. If the Process Builder is Invoked by other Process Builders, it SHALL always start by IPB instead.
  2. The end of a Process Builder name SHOULD always be:
    • the name of the object, in the case of a Triggering Process Builder (TPB)
    • the action carried out, in the case of an Invoked Process Builder (IPB)
    • the trigger and action, in the case of a standalone Process Builder (PB)
  3. A Process Builder name COULD contain either C, CE, or CES wrapped by underscores, to show if the PB triggers on Creation, on Creation and Edit, or on Subsequent Modifications that Fill Criteria. The default assumed setting is CE if none is written. *3
  4. All Process Builder Triggers MUST have a description detailing their purpose.
  5. A Process Builder Decision Diamond SHALL be named after the criteria that are used in the most precise manner possible.
  6. A Process Builder Action SHALL be named after the action being carried out in the most precise manner possible.
Type Name Description
Process Builder TPB01_Opportunity This Process Builder invokes all invocable Opportunity Process builders
Process Builder IPB01_SetOwnerTarget Copies over target from Owner to calculate monthly efficiency
Process Builder PB01_ContactBirthdayEmail Sends a birthday email on the contact’s birthday.
Decision Diamond Status is “Approved” #N/A
Action Sets Contact Scoring to 10 #N/A
Process Builder (possible variation) TPB01_Opportunity This Process Builder invokes all invocable Opportunity Process builders. Also Handles various actions such as birthday emails.

Big Objects

Big objects are Salesforce's take on NoSQL (although they work much like common SQL). They allow large data storage on Salesforce's servers. Ideal for Big Data and compliance scenarios.

Big Objects

Sample scenario - store all field changes for an object

In this scenario, the customer wants, for whatever reason, to track changes to all the fields on a single record. Salesforce provides default field history tracking, but it is available for only twenty fields per object. If the object in question has more, the requirement cannot be met with the standard, declarative tools.

Big objects are the ideal candidate for this, because we are talking about data that users probably don't need to report on (big objects do not support reporting), there's a chance that it is a lot of data (if the record is changed frequently), and possibly there's a legal reason for keeping those changes stored (compliance).

So to do that we'll need a trigger on the object, running preferably on the after update trigger event. At this point the record is already saved but the transaction is not yet committed to the database, so the changes have been made and we can get the difference using Trigger.oldMap to retrieve the old version of the changed records.

While iterating through all the fields on the object, we check for differences, and for each one we instantiate a new big object record. When the iteration ends, we insert them immediately (using Database.insertImmediate()).
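To make that concrete, here is a minimal, illustrative sketch. The choice of Account as the triggering object is ours; the big object ObjectHistory__b and its fields match the query further down, and Timestamp__c is assumed to be a Date/Time field.

trigger AccountHistoryTracker on Account (after update) {
    // Account is only an example object; swap in your own.
    List<ObjectHistory__b> changes = new List<ObjectHistory__b>();
    Map<String, Schema.SObjectField> fieldMap = Schema.SObjectType.Account.fields.getMap();

    for (Account newRecord : Trigger.new) {
        Account oldRecord = Trigger.oldMap.get(newRecord.Id);
        for (String fieldName : fieldMap.keySet()) {
            Object oldValue = oldRecord.get(fieldName);
            Object newValue = newRecord.get(fieldName);
            if (oldValue != newValue) {
                // One big object record per changed field.
                changes.add(new ObjectHistory__b(
                    RecordId__c  = newRecord.Id,
                    Field__c     = fieldName,
                    Timestamp__c = System.now(),
                    OldValue__c  = String.valueOf(oldValue),
                    NewValue__c  = String.valueOf(newValue)
                ));
            }
        }
    }

    if (!changes.isEmpty()) {
        // Inserts the big object records outside the current transaction.
        Database.insertImmediate(changes);
    }
}

In a real org you would probably exclude audit fields such as LastModifiedDate and SystemModstamp (which change on every save), and you may prefer to move the insertImmediate() call to asynchronous Apex, since big object DML cannot be mixed with regular DML in the same transaction in some contexts (notably Apex tests).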

In this configuration, the big object's index would be the related record's Id, the field that was modified, and the date/time stamp of the change (depending on requirements, one might want to spend some time thinking about whether it is best to have the timestamp before the field name). This way, if we wanted to display the data in a Lightning Component, for example, we could query the specific record's data synchronously in Apex thanks to the index created:

SELECT 
    RecordId__c,
    Field__c,
    Timestamp__c,
    OldValue__c,
    NewValue__c
FROM ObjectHistory__b
WHERE RecordId__c = :theRecordId
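For completeness, the Apex wrapper a Lightning Component could call might look like the following minimal sketch (the class and method names are hypothetical):

public with sharing class ObjectHistoryController {

    // Returns the change log for one record. The filter on RecordId__c keeps the
    // query selective, since it is the first field of the big object's index.
    @AuraEnabled(cacheable=true)
    public static List<ObjectHistory__b> getHistory(Id theRecordId) {
        return [
            SELECT RecordId__c, Field__c, Timestamp__c, OldValue__c, NewValue__c
            FROM ObjectHistory__b
            WHERE RecordId__c = :theRecordId
        ];
    }
}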

 

Mass Update Access to Objects And Fields For Profiles And Permission Sets

If you need to update Object-level permissions (CRED) or Field level Permissions (FLS) for a large number of Objects, Fields, Profiles, or Permission Sets, rather than manually clicking dozens of checkboxes on multiple pages, it is sometimes easier and faster to make those updates using tools like Data Loader. This article describes how to make those updates, as well as relevant information about the data model regarding FLS and CRED. It is intended for declarative developers.

Mass Update Access to Objects And Fields For Profiles And Permission Sets

Object Permissions - Basic Functionality

When dealing with Profiles and CRED there are three objects involved:

  • Profile object
  • PermissionSet object
  • ObjectPermissions object

Note: Every Profile has a corresponding child PermissionSet record, as indicated by the ProfileId field on the PermissionSet record. When dealing with Permission Sets, the Profile object doesn’t factor in.

For every combination of Profile and Object, there is a corresponding ObjectPermissions record with six boolean fields that control the access level for that Profile to that object. The same goes for Permission Sets. Those six fields are:

  • PermissionsCreate
  • PermissionsDelete
  • PermissionsEdit
  • PermissionsRead
  • PermissionsViewAllRecords
  • PermissionsModifyAllRecords

Note: If a Profile or Permission Set has no access to an object, then there is no ObjectPermissions record for that object/profile combination. You cannot have an ObjectPermissions record where all “permissions” fields are FALSE.

In addition to these boolean fields, there are two other uneditable fields which indicate which object the record is related to (sObjectType), as well as the related Permission Set (ParentId). Remember, even if the ObjectPermissions record is controlling access for a Profile, it will be related to a Permission Set. That Permission Set will have the Id of the corresponding Profile in the ProfileId field.

When a Profile or Permission Set is granted access to an Object, Salesforce automatically creates a new ObjectPermissions record. When access to that Object is removed, Salesforce deletes that record.

Mass Update Access to Objects And Fields For Profiles And Permission Sets

Field Permissions - Basic Functionality

Field-Level Security works very similarly to Object-Level Permissions. When dealing with Profiles and FLS, there are three objects involved:

  • Profile object
  • PermissionSet object
  • FieldPermissions object

Note: Every Profile has a corresponding child PermissionSet record, as indicated by the ProfileId field on the PermissionSet record. When dealing with Permission Sets, the Profile object doesn’t factor in.

For every combination of Profile and Field, there is a corresponding FieldPermissions record. Each record has two boolean fields that control the access level for that Profile to that field. The same goes for Permission Sets. Those two fields are:

  • PermissionsEdit
  • PermissionsRead

Note: If a Profile or Permission Set has no access to a Field, then there is no FieldPermissions record for that Field/Profile combination. You cannot have a FieldPermissions record where all “permissions” fields are FALSE.

In addition to these boolean fields, there are three other uneditable fields which indicate which Object the record is related to (sObjectType), which specific Field this record controls access to (Field), and the related Permission Set (ParentId). Remember, even if the FieldPermissions record is controlling access for a Profile, it will be related to a Permission Set. That Permission Set will have the Id of the corresponding Profile in the ProfileId field.

When a Profile or Permission Set is granted access to a Field, Salesforce automatically creates a new FieldPermissions record. When access to that Field is removed, Salesforce deletes that record.

Mass Update Access to Objects And Fields For Profiles And Permission Sets

Query CRED And FLS Permissions - Examples

Query All Permissions

To get a list of every CRED setting for every Profile and Permission Set in Salesforce run the following query, or use Data Loader to export all ObjectPermissions records with the following fields:

SELECT Id, ParentId, Parent.ProfileId, Parent.Profile.Name, SobjectType, PermissionsCreate, PermissionsDelete, PermissionsEdit, PermissionsRead, PermissionsViewAllRecords, PermissionsModifyAllRecords
FROM ObjectPermissions

To query all Field permissions use a similar query:

SELECT Id, ParentId, Parent.ProfileId, Parent.Profile.Name, SobjectType, Field, PermissionsEdit, PermissionsRead
FROM FieldPermissions

In order to limit your search to specific profiles, add a filter to the end using the Parent.ProfileId field. Example:

SELECT Id, (...)
FROM (...)
WHERE Parent.Profile.Name = 'Sales Manager'

Or if you have a list of profiles:

SELECT Id, (...)
FROM (...)
WHERE Parent.Profile.Name IN ('Sales Manager', 'Sales', 'Marketing')

To limit your query to only see permissions related to Profiles and not Permission Sets, add a filter to the end using the Parent.ProfileId field to make sure it’s not empty:

SELECT Id, (...)
FROM (...)
WHERE Parent.ProfileId != null

Conversely, to limit your query to only show permissions related to Permission Sets, adjust the filter:

WHERE Parent.ProfileId = null

In order to limit the Objects you want permissions for, add a filter to the end using the SobjectType field. Example:

SELECT Id, (...)
FROM (...)
WHERE SobjectType IN ('Account','Opportunity','Contact')

Query Permissions For Specific Fields

In order to limit the Fields you want permissions for, add a filter to the end using the Field field. Note that the values in the Field field include the API name of the Object, followed by a period, then the API name of the Field. Example:

SELECT Id, (...)
FROM (...)
WHERE Field IN ('Account.Customer__c','Opportunity.Total__c','Contact.LastName')
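Putting the pieces together, a complete query for one profile and a handful of fields could look like this (reusing the illustrative profile and field names from the examples above):

SELECT Id, ParentId, Parent.ProfileId, Parent.Profile.Name, SobjectType, Field, PermissionsEdit, PermissionsRead
FROM FieldPermissions
WHERE Parent.Profile.Name = 'Sales Manager'
AND Field IN ('Account.Customer__c','Opportunity.Total__c','Contact.LastName')
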
Mass Update Access to Objects And Fields For Profiles And Permission Sets

Updating, Deleting, and Adding Permissions

After running your query you will have a table describing access for all objects/fields where at least one profile or permission set has some kind of access. This is an important concept to understand. If no Profiles or Permission Sets have access to an Object or Field, there will not be a record for that object/field.

For existing ObjectPermissions/FieldPermissions records, you can make updates to the TRUE and FALSE values in each column, then use Data Loader to upload the changes using the Update feature.

To remove all access to an Object/Field, you will need to use the Delete feature in Data Loader to delete the appropriate ObjectPermissions/FieldPermissions records, using a list of Ids.

To add access where there is none, you will need to use the Insert feature in Data Loader to create new ObjectPermissions/FieldPermissions records.

To data load ObjectPermissions records, include the following fields:

  • sObjectType
  • ParentId
  • PermissionsCreate
  • PermissionsDelete
  • PermissionsEdit
  • PermissionsRead
  • PermissionsViewAllRecords
  • PermissionsModifyAllRecords

To data load FieldPermissions records, include the following fields:

  • sObjectType
  • Field
  • ParentId
  • PermissionsEdit
  • PermissionsRead
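
If you prefer Anonymous Apex over Data Loader for a one-off change, the same records can be created in code. A minimal, illustrative sketch (the permission set API name is a hypothetical placeholder, and it assumes no ObjectPermissions/FieldPermissions record exists yet for that combination):

// Grant Read/Edit on Account, and Read on one field, to a single permission set.
// 'Sales_Extra_Access' is a placeholder permission set API name.
PermissionSet ps = [SELECT Id FROM PermissionSet WHERE Name = 'Sales_Extra_Access' LIMIT 1];

insert new ObjectPermissions(
    ParentId = ps.Id,
    SobjectType = 'Account',
    PermissionsRead = true,
    PermissionsEdit = true,
    PermissionsCreate = false,
    PermissionsDelete = false,
    PermissionsViewAllRecords = false,
    PermissionsModifyAllRecords = false
);

insert new FieldPermissions(
    ParentId = ps.Id,
    SobjectType = 'Account',
    Field = 'Account.Customer__c',
    PermissionsRead = true,
    PermissionsEdit = false
);

The same field list drives a Data Loader insert file; either way, keep the dependency notes on the next page in mind.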
Mass Update Access to Objects And Fields For Profiles And Permission Sets

Important Notes

General
  • Upserts are generally not recommended due to the extremely slow speed. It will most likely take much longer to make the upsert than it would to split the records into separate Insert and Update files.
  • As stated above, you cannot have an ObjectPermissions or FieldPermissions record where all “permissions” fields are FALSE. If you try to update or insert one, you will get an error. Instead, to remove all access to an object, you have to delete the ObjectPermissions record.
  • Custom Settings and Custom Metadata Types don’t have ObjectPermissions records related to them. Trying to insert or update them will just return an error.
  • Watch out for permission dependencies. When updating permissions using the Profile edit page, for example, Salesforce will automatically enable dependent permissions when needed. When data loading permissions, Salesforce will not automatically update user or system permissions on the profile if you try to update an object permission that has a dependency. Instead the update or insert will fail and you will get an error on that row. Accounts in particular have a large number of dependencies. Example:
FIELD_INTEGRITY_EXCEPTION: Permission Convert Leads depends on permission(s): Create Account; Permission Read All Asset depends on permission(s): Read All Account; Permission Read All Contract depends on permission(s): Read All Account; Permission Read All Dsx_Invoice__c depends on permission(s): Read All Account; Permission Read All Orders__c depends on permission(s): Read All Account; Permission Read All OrgChartPlus__ADP_OrgChartEntityCommon__c depends on permission(s): Read All Account; Permission Read All OrgChartPlus__ADP_OrgChart__c depends on permission(s): Read All Account; Permission Read All Partner_Keyword_Mapping__c depends on permission(s): Read All Account; Permission Read All Zuora__CustomerAccount__c depends on permission(s): Read All Account
  • Additionally, keep in mind what is required at the Object level when setting certain permissions. For example, all levels of access (Edit, Create, etc..) require Read access. Delete access requires Read as well as Edit. Modify All requires all levels of access except Create. Salesforce will not allow you to data load permissions with illegal combinations of CRED access.
  • When using SOQL to query object permissions, be aware that some object permissions are enabled because a user permission requires them. The exception to this rule is when “Modify All Data” is enabled on the Profile or Permission Set (note: not to be confused with the "Modify All" CRED permission). While it enables all object permissions, it doesn’t physically store any object permission records in the database. As a result, unlike object permissions that are required by a user permission - such as “View All Data” or “Import Leads” - the query still returns permission sets with “Modify All Data,” but the object permission record will contain an invalid ID that begins with “000”. This ID indicates that the profile has full access due to “Modify All Data” and the object permission record can’t be updated or deleted.
  • To remove full access from these objects, disable “Modify All Data” at the Profile level, and then delete the resulting object permission record.
Resources

Object Permissions:
https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_objects_objectpermissions.htm

Field Permissions:
https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_objects_fieldpermissions.htm

Flow Conventions

Naming and structural conventions related to Flows and the Cloud Flow Engine.

Flow Conventions

Flow General Notes

Generalities

As of writing this page, August 10th 2023, Flows are the primary source of automation on the Salesforce platform. We left this sentence in because the earlier iteration (from 2021) identified that Flows would replace Process Builder, and we like being right.

It is very important to note that Flows have almost nothing to do, complexity-wise, with Workflows, Process Builder, or Approval Processes. Where the old tools did a lot of (over)-simplifying for you, Flow exposes a lot of things that you quite simply never had to think about before, such as execution context, DML optimization, batching, variables, variable passing, etc.

So if you are an old-timer upgrading your skills, note that a basic understanding of programming (batch scripting is more than enough) helps a lot with Flow.
If you're a newcomer to Salesforce and you're looking to learn Flow, same comment - this is harder than most of the platform (apart from Permissions) to learn and manipulate. This is normal.

Intended Audience

These conventions are written for all types of Salesforce professionals to read, but the target audience is the administrator of an organization. If you are an ISV, you will have considerations regarding packaging that we do not, and if you are a consultant, you should ideally use whatever the client wants (or the most stringent convention available to you, to guarantee quality).

On Conventions

As long as we're doing notes: conventions are opinionated, and these are no different. Much like you have different APEX trigger frameworks, you'll find different conventions for Flow. These specific conventions are made to be maintainable at scale, with ease of modification and upgrade. This means that they by nature include boilerplate that you might find redundant, and are very prescriptive about elements (to optimize for cases where you have hundreds of Flows in an organization). This does not mean you need to follow everything. A reader should try to understand why the conventions are a specific way, and then decide whether or not this applies to their org.

At the end of the day, as long as you use any convention in your organization, we're good. This one, another one, a partial one, doesn't matter. Just structure your flows and elements.

On our Notation

Finally, regarding the naming of sub-elements in the Flows: we've had conversations in the past about the pseudo-hungarian notation that we recommend using. To clarify: we don't want to use Hungarian notation. We do so because Flow still doesn't split naming schemes between variables, screen elements, or data manipulation elements. This basically forces you to use Hungarian notation so you can have a var_boolUserAccept and a S01_choiceUserAccept (a variable holding the result of whether a user accepts some conditions, and the presentation in radio buttons of said acceptance), because you can't have two elements just named UserAccept even if technically they're different.

On custom code, plugins, and unofficialSF

On another note: Flow allows you to use custom code to extend its functionality. We define "custom code" as any LWC, APEX class, or associated component that is written by a human and plugs into Flow. We recommend using as few of these elements as possible, and as many as needed. This includes UnofficialSF.

Whether you code stuff yourself, or someone else does it for you, custom code always requires audit and maintenance. Deploying UnofficialSF code to your org basically means that you own the maintenance and audit of it, much like if you had developed it yourself. We have the same reservations as with any piece of code from GitHub - if you don't know what it does exactly, you shouldn't be using it. This is because any third-party code is not part of your MSA with Salesforce, and if it breaks, is a vector of attack, or otherwise negatively impacts your business, you have no official support or recourse.

This is not to say that these things are not great, or value-adding - but you are (probably) an admin of a company CRM, which means your first consideration should be user data and compliance, with ease of use coming second.

Bonus useless knowledge: Flows themselves are just an old technology that Salesforce released in 2010: Visual Process Manager. That itself is actually just a scripting language: “The technology powering the Visual Process Manager is based on technology acquired from Informavores, a call scripting startup Salesforce bought last year.” (2009) Source

Flow Conventions

What Automation do I create Flowchart

A flowchart showing which automation to create.

Flow Conventions

Flow Meta Conventions

Read these Resources first

  1. The official Flows best practice doc. Note we agree on most things. Specifically the need to plan out your Flow first.

  2. The Flows limits doc. If you don't know the platform limits, how can you build around them?

  3. The Transactions limits doc. Same as above, gotta know limits to play around them.

  4. The What Automation Do I Create Flowchart. Not everything needs to be a Flow.
  5. The Record-Triggered Automation Guide, if applicable.

Best Practices

These are general best practices that do not pertain to individual Flows, but to how Flows are used within an Organization.

On Permissions

Flows should ALWAYS execute with the smallest set of permissions needed to carry out their task.
Users should also ideally not have access to Flows they don't require.
Giving Setup access just so someone can look up a DeveloperName is bad; you should be using Custom Labels to store the Ids and reference those instead, precisely to limit Setup access.

Use System mode sparingly. It is dangerous.
If used in a Communities setting, I REALLY hope you know why you're exposing data publicly over the internet or that you're only committing information with no GETs.

Users can be granted access to specific Flows via their Profiles and Permission Sets, which you should really be using to ensure that normal users can't run the Flow that mass-updates the client base, for example.

Record-Triggered Flows and APEX Triggers should ideally not coexist on the same Object in the same Context.

"Context" here means the APEX Trigger Context. Note that not all of these contexts are exposed in Flow:

- Screen Flows execute outside of these contexts, but Update elements do not allow you to carry out operations in the before context.
- Record-Triggered Flows execute either in before or after contexts, depending on what you chose at the Flow creation screen (the options are named "Fast Field Updates" and "Actions and Related Records", respectively, because it seems Salesforce and I disagree that training people on a proper understanding of how the platform works is important).

The reason for the "same context" exclusivity is in case of multiple Flows and heavy custom APEX logic: in short, unless you plan explicitly for it, the presence of one or the other forces you to audit both in case of additional development, or routine maintenance.
You could technically leverage Flows and APEX perfectly fine together, but if you have a before Flow and a before Trigger both doing updates to fields, and you accidentally reference a field in both... debugging that is going to be fun.

So if you start relying on APEX Triggers, while this doesn’t mean you have to change all the Flows to APEX logic straight away, it does mean you need to plan for a migration path.

In the case where some automations need to be admin-editable but other automations require custom code, you should move the trigger-context logic to APEX Triggers, and leverage Subflows which get called from your APEX logic.

Flow List Views should be used to sort and manage access to your Flows easily

The default list view is not as useful as others can be.
We generally suggest creating at minimum one list view, and two if you have installed packages that ship Flows:

Flows are considered Code for maintenance purposes

Do NOT create or edit Flows in Production, especially a Record-Triggered flow.
If any user does a data load operation and you corrupt swaths of data, you will know the meaning of “getting gray hairs”, unless you have a backup - which I am guessing you will not have if you were doing live edits in production.

No, this isn't a second helping of our note in the General Notes.
This is about your Flows - the ones you built, the ones you know very well and are proud of.
There are a swath of reasons to consider Flows to be Code for maintenance purposes, but in short:

In short - it's quick and admin-friendly development, but it's still development.

On which automation to create

In addition to our (frankly not very beautiful) Flowchart, when creating automations, the order of priority should be:

On APEX and LWCs in Flows

To reiterate, if you install unpackaged code in your organization, YOU are responsible for maintaining it.

Flow Testing and Flow Tests

If at all possible, Flows should be Tested. This isn't always possible because of these considerations (which aren't actually exhaustive - I have personally seen edge cases where Tests fail but the actual function runs, because of the way Tests are built, and I have also seen deployment errors linked to Tests). Trailheads exist to help you get there.

A Flow Test is not just a way to check your Flow works. A proper test should:
- Test the Flow works
- Test the Flow works in other Permission situations
- Test the Flow doesn't work in critical situations you want to avoid [if you're supposed to send one email, you should probably catch the situation where you're sending 5 mil]
... and in addition to that, a proper Flow Test will warn you if things stop working down the line.

Most of these boilerplates are negative bets against the future - we are expecting things to break, people to forget configuration, and updates to be made out of process. Tests are a way to mitigate that.

We currently consider Flow Tests to be "acceptable but still bad", which we expect to change as time goes on, but as it's not a critical feature, we aren't sure when they'll address the current issues with the tool.

Note that proper Flow Testing will probably become a requirement at some point down the line.

On Bypasses

Flows, like many things in Salesforce, can be configured to respect Bypasses.
In the case of Flows, you might want to call these "feature flags".

This is a GREAT best practice, but is generally overkill unless you are a very mature org with huge amounts of processes.


Flow Conventions

Flow Structural Conventions - Common Core

As detailed in the General Notes section, these conventions are heavily opinionated towards maintenance and scaling in large organizations. The conventions contain:

Because they are triggered by the user and outside of a specific record context, Screen Flows do not currently require structural adaptations beyond the common core specifications.

Common Core Conventions

On System-Level Design

Do not do DMLs or Queries in Loops.

Simpler: No pink squares in loops.

DML is Data Manipulation Language. Basically it is what tells the database to change stuff. DML Operations include Insert, Update, Upsert, and Delete, which you should know from Data Loader or other such tools.

Salesforce now actually warns you when you're doing this, but it still bears saying.

A screenshot indicating a pink element (Create Records) within a loop, labelled "Don't do this".

You really must not do this because:
All Pink (DML or Query) elements should have Error handling

Error, or Fault Paths, are available both in Free Design mode and the Auto-Layout Mode. In Free mode, you need to handle all possible other paths before the Fault path becomes available. In Auto-Layout mode, you can simply select Fault Path.

Screen Flow? Throw a Screen, and display what situation could lead to this. Maybe also send the Admin an email explaining what happened.

A screenshot of Screen Flow error handling.

Record-triggered Flow? Throw an email to the APEX Email Exception recipients, or emit a Custom Notification.
Hell, better yet throw that logic into a Subflow and call it from wherever.

(Note that if you are in a sandbox with email deliverability set to System Only, regular flow emails and email alerts will not get sent.)

A screen flow with multiple FAULT paths going to proper error handling.

Handling Errors this way allows you to:
- not have your users presented with UNEXPECTED EXCEPTION - YOUR ADMIN DID THINGS BADLY
- maybe deflect a few error messages, in case some things can be fixed by the user doing things differently
- have a better understanding of how often Errors happen.

You want to supercharge your error handling? Audit Nebula Logger to see if it can suit your needs. With proper implementation (and knowledge of how to service it, remember that installed code is still code that requires maintenance), Nebula Logger will allow you to centralize all logs in your organization, and have proper notification when something happens - whether in Flow, APEX, or whatever.

Don't exit loops based on decision checks

The Flow engine doesn't support that well and you will have weird and confusing issues if you ever go back to the main loop.

A flow with a Decision element allowing an exit from a Loop, which is a bad practice.

Don’t do this either - always finish the loop

Issues include variables not being reset, DML errors if you do come back to the loop, and all around general unpredictable situations.
You can still do this if you absolutely NEVER come back to the loop, but it's bad design.

Do not design Flows that will have long Wait elements

This is often done by Admins coming from the Workflow or Process Builder space, where you could just say "do that 1 week before contract end date" or "1 day after Opportunity closure". This design is sadly as outdated as the tools that permitted it.
Doing this will have you exceed your Paused Interview limits, and actions just won't be carried out.

A proper handling of "1 day before/after whenever", in Flow, is often via a Scheduled Flow.
Scheduled Flows execute once daily (or more often if you use plugins to allow it), check conditions, and execute based on these conditions. In the above case, you would be creating a Scheduled Flow that:

Despite it not being evident in the Salesforce Builder, there is a VERY big difference between the criteria in the Schedule Flow execution start, and an initial GET.
- Putting criteria in the Start Element offers fewer conditions, but effectively limits the scope of the Flow to only these records, which is great in big environments. It also fires one Flow Interview per record, and then bulkifies operations at the end - so doing a GET when you put criteria in the Start element should only be done after due consideration.
- Conversely, putting no criteria and relying on an initial Get runs a single Flow Interview, and so will run less effectively on huge amounts of records, but does allow you to handle more complex selection criteria.

Do not Over-Optimize your Flows

When Admins start becoming great at Flows, everything looks like a Flow.
The issue with that is that sometimes, Admins will start building Flows that shouldn't be built because Users should be using standard features (yes, I know, convincing Users to change habits can be nigh impossible but is sometimes still the right path)... and sometimes, they will keep at building Flows that just should be APEX instead.

If you are starting to hit CPU timeout errors, Flow Element Count errors, huge amounts of slowness... You're probably trying to shove things in Flow that should be something else instead.

APEX has more tools than Flows, as do LWCs. Sometimes, admitting that Development is necessary is not a failure - it's just good design.

On Flow-Specific Design

Flows should have one easily identifiable Triggering Element

This relates to the Naming Conventions.

Flow Type - Triggering Element

- Record-Triggered Flows: the Record that triggers the DML.
- Event-based Flows: a single event, as simple as possible.
- Screen Flows: either a single recordId, a single sObject variable, or a single sObject list variable. In all cases, the Flow that is being called should query what it needs by itself, and output whatever is needed in its context.
- Subflows: the rule can vary - it can be useful to pass multiple collections to a Subflow in order to avoid recurring queries on the same object. However, passing multiple single-record variables, or single text variables, to a Subflow generally indicates a design that is overly coupled with the main flow and should be more abstracted.

A screenshot of a Flow List view

Fill in the descriptions

You'll thank yourself when you have to maintain it in two years.
Descriptions should not be technical, but functional. A Consultant should be able to read your Flow and know what it does technically. The Descriptions should therefore explain what function the Flow provides within the given Domain (if applicable) of the configuration.

A screenshot of Flow descriptions. Descriptions shouldn't be too technical.

Don't use the "Set Fields manually" part of Update elements

Yes, it's possible. It's also bad practice. You should always rely on a record variable, which you Assign values to, before using Update with "use the values from a record variable". This is mainly for maintenance purposes (in 99% of cases you can safely ignore pink elements in maintenance to know where something is set), but is also impactful when you do multi-record edits and you have to manipulate the record variable and store the resulting manipulation in a record collection variable.

A screenshot of the "Get > Assign > Update" pattern in Flow elements.

A screenshot of the assignment details.

Try to pass only one Record variable or one Record collection to a Flow or Subflow

See "Tie each Flow to a Domain".
Initializing a lot of Record variables on run often points to you being able to split that subflow into different functions. Passing Records as the Triggering Element, and configuration information as variables is fine within reason.

In the example below, the Pricebook2Id variable should be taken from the Order variable.

A screenshot of the Flow Debug run screen.

Try to make Subflows as reusable as possible.

A Subflow that does a lot of different actions will probably be single-use, and if you need a subpart of it in another logic, you will probably build it again, which may lead to higher technical debt.
If at all possible, each Subflow should execute a single function, within a single Domain.
Yes, this ties into "service-based architecture" - we did say Flows were code.

Do not rely on implicit references

This is when you query a record, then fetch parent information via {MyRecord.ParentRecord__c.SomeField__c}. While this is useful, it's also very prone to errors (specifically with fields like RecordType) and makes for wonky error messages if the User does not have access to one of the intermediary records.
Do an explicit Query instead if possible, even if it is technically slower.

Tie each Flow to a Domain

This is also tied to Naming Conventions. Note that in the example below, the Domain is the Object that the Flow lives on. One might say it is redundant with the Triggering Object, except Scheduled Flows and Screen Flows don't have this populated, and are often still linked to specific objects, hence the explicit link.

Domains are definable as Stand-alone groupings of function which have a clear Responsible Persona.

A schema of Domain segregation, illustrating that Domains are self-contained and communication with other domains is done via Events.

Communication between Domains should ideally be handled via Events

In short, if a Flow starts in Sales (actions that are taken when an Opportunity closes for example) and finishes in Invoicing (creates an invoice and notifies the people responsible for those invoices), this should be two separate Flows, each tied to a single Domain.

Note that the Salesforce Event bus is mostly built for External Integrations.
The amount of events we specify here is quite high, and as such on gigantic organisations it might not be best practice to handle things this way - you might want to rely on an external event bus instead.

That being said, if you are in fact an enterprise admin, I expect you are considering the best use case in every practice you implement, and as such this disclaimer is unnecessary.

A screenshot of two flows which highlight the separation of concerns between domains with an event firing a flow from another one.
Example of Event-Driven decoupling

Avoid cascading Subflows wherein one calls another one that calls another one

Unless the secondary subflows are basically fully abstract methods handling inputs from any possible Flow (like one that returns a collection from a multipicklist), you're adding complexity in maintenance which will be costly.

Flow Conventions

Flow Structural Conventions - Record-Triggered

As detailed in the General Notes section, these conventions are heavily opinionated towards maintenance and scaling in large organizations. The conventions contain:

    These Record-Triggered Conventions expect you to be familiar with the tools at your disposal to handle order of execution and general Flow Management, including the Flow Trigger Explorer, Scheduled Paths, Entry Criteria (linked: a page that should document entry criteria but doesn't).

    This page directly changes conventions that were emitted by SFXD in 2019, and reiterated in 2021.
    This is because the platform has changed since then, and as such we are recommending new, better, more robust ways to build stuff.
    If you recently used our old guides - they are still fine, we just consider this new version to be better practice.

    Record-Triggered Flow Design

    Before Creating a Flow

    Ensure there are no sources of automation touching the Object or Fields

    If the same field is updated in another automation, default to that automation instead, or refactor that automation to Flow.
    If the Object is used in other sources of automation, you might want to default to that as well, or refactor that automation to Flow, unless you can ensure that both that source of automation and the Flow you will create will not cross-impact each other.

    You can leverage "where is this used" in sandbox orgs to check if a field is already referenced in a Flow - or take the HULK SMASH approach and just create a new sandbox, and try to delete the field. If it fails deletion, it'll tell you where it is referenced.

    Verify the list of existing Flows and Entry Criteria you have

    You don't want to have multiple sources of the same entry criteria in Flows because it will make management harder, and you also don't want to have multiple Flows that do almost the same thing because of scale.

    Identifying if you can refactor a Flow into a Subflow that will be called from multiple places is best done before trying to build anything.

    Ask yourself if it can't be a Scheduled Flow instead

    Anything date based, anything that has wait times, anything that doesn't need to be at the instant the record changes status but can instead wait a few hours for the flow to run - all these things can be scheduled Flows. This will allow you to have better save times on records.

    Prioritize BEFORE-save operations whenever possible

    This is more efficient in every way for the database, and avoids recurring SAVE operations.
    It also mostly avoids impacts from other automation sources (apart from before-save APEX).
    Designing your Flow to have as many before-save elements as possible will save you time and effort in the long run.

    Check if you need to update your bypasses

    Specifically for Emails, using bypasses remains important - because sending emails to your entire database when you're testing stuff is probably not what you want.

    Consider the worst case

    Do not build your system for the best user but the worst one. Ensure that faults are handled, ensure that a user subject to every single piece of automation still has a usable system, etc.

    On the number of Flows per Object and Start Elements

    Entry Criteria specify when a Flow is evaluated. They are a very efficient way to avoid Flows triggering unduly and save a lot of CPU time. Entry Criteria however do require knowledge of Formulas to use fully (the basic "AND" condition doesn't allow a few things that the Formula editor does in fact handle properly), and it is important to note that the entire Flow does not execute if the Entry Criteria aren't met, so you can't catch errors or anything.

    To build on what's written above:

    Logical separation of responsibilities is a topic you'll find not only here but also in a lot of development books.

    Before-Save Flows don't actually require an Update element - this is just for show and to allow people to feel more comfortable with it. You can technically just use Assignments to manipulate the $Record variable with the same effect. It actually used to be the only way to do before-save, but was thought too confusing.

    We used to recommend a single Flow per context. This is obviously no longer the case.

    This is because anything that pattern provided, other tools now provide, and do better.

    The "One flow per Object pattern" was born because:
    - Flows only triggered in after contexts
    - Flows didn't have a way to be orchestrated between themselves
    - Performance impact of Flows was huge because of the lack of entry criteria

    None of that is true anymore.

    The remnant of that pattern still exists in the "no entry criteria, after context, flow that has decision nodes", so it's not completely gone.

    So while the advent of Flow Trigger Explorer was one nail in the coffin for that pattern, the real final one was actual good entry criteria logic.

     

    Entry Criteria are awesome but are not properly disclosed either in the Flow List View, nor the Start Element. Ensure that you follow proper Description filling so you can in fact know how these elements work, otherwise you will need to open every single Flow to check what is happening.

    On Delayed Actions

    Flows allow you to do complex queries and loops as well as schedules. As such, there is virtually no reason to use Wait elements or delayed actions, unless said waits are for a platform event, or the delayed actions are relatively short.

    Any action that is scheduled for a month in the future for example should instead set a flag on the record, and let a Scheduled Flow evaluate the records daily to see if they fit criteria for processing. If they do in fact fit criteria, then execute the action.

    A great example of this is Birthday emails - instead of triggering an action that waits for a year, do a Scheduled Flow running daily on contacts whose birthday it is. This makes it a lot easier to debug and see what's going on.

    Flow Conventions

    Flow Structural Conventions - Scheduled

    As detailed in the General Notes section, these conventions are heavily opinionated towards maintenance and scaling in large organizations. The conventions contain:

      Scheduled Flow Design

      As detailed in the Common Core conventions, despite it not being evident in the Salesforce Builder, there is a VERY big difference between the criteria in the Schedule Flow execution start, and an initial GET element in a Scheduled Flow that has no Object defined.

      - Putting criteria in the Start Element has less conditions available, but effectively limits the scope of the Flow to only these records, which is great in big environments. It also fires One Flow Interview per Record, and then bulkifies operations at the end.

      A screenshot of the Start element Entry Criteria.

      An often-made mistake is to do the above selection, say "Accounts where Active = TRUE" for example, and then do a Get Records afterwards, querying the accounts again, because of habits tied to Record-Triggered Flows.
      If you do this, you are effectively querying the entire list of Accounts X times, where X is the number of Accounts in your original criteria. Which is bad.


      - Conversely, putting no criteria and relying on an initial Get runs a single Flow Interview, and so will run less effectively on huge amounts of records, but does allow you to handle more complex selection criteria.

      A screenshot of a Get Records with a description, the description is opened in pop-up view.

      In the first case, you should consider that there is only one record selected by the Flow, which is populated in $Record - much like in Record-Triggered Flows.
      In the second screenshot, you can see that the Choose Object field is empty, but the GET is done afterwards - $Record is as such empty, but the Get Active Accounts element will generate a collection variable containing multiple accounts, which you will need to iterate over (via a Loop element) to handle the different cases.

      Flow Conventions

      Flow Naming Conventions

      Meta-Flow Naming

      1. A Flow name shall always start with the name of the Domain from which it originates, followed by an underscore.
        In most cases, for Flows, the Domain is equivalent to the Object that it is hosted on.
        As per structural conventions, cross-object Flows should be avoided and reliance on Events to synchronize flows that do cross-object operations should be used.

        In Account_BeforeSave_SetClientNumber, the Domain is Account, as this is where the automation is started. It could also be something like AccountManagement, if the Account Management team owned the process for example.

      2. The Domain of the Flow shall be followed by a code indicating the type of the Flow, respecting the cases as follows:

        1. If the flow is a Screen Flow, the code shall be SCR.

        2. If the flow is a SubFlow, the code shall be SFL.

        3. If the flow is specifically designed to be a scheduled flow that runs on a schedule, the code shall be SCH.

        4. If the flow is a Record Triggered flow, the code shall instead indicate the contexts in where the Record Triggered Flow executes.
          In addition, the flow name shall contain the context of execution, meaning either Before or After, followed by either Create, Update or Delete.

        5. If the flow is an Event Triggered flow, the code shall be EVT instead.

        6. If the flow is specifically designed to be a Record Triggered flow that ONLY handles email sends, the code shall be EML instead.

          In Account_AftercreateAftersave_StatusUpdateActions, you can identify that it is Record-Triggered, executes on both creation and update in the After context, and that it carries out actions when the entry criteria (the status has changed) are met.

          In the case of Invoice_SCR_CheckTaxExemption, you know that it is a Screen Flow, executing from the Invoice Lightning Page, that handles Tax Exemption related matters.

      3. A Flow shall further be named after the action being carried out, in the most precise manner possible. For Record-Triggered Flows, this is limited to what triggers it. See the example table for details.

      4. A Flow Description should always indicate what the Flow requires to run, what the entry criteria are, what it does functionally, and what it outputs.

      Type

      Name

      Description

      Screen Flow

      Quote_SCR_addQuoteLines

      [Entry = None]
      A Screen flow that is used to override the Quote Lines addition page. Provides function related to Discount calculation based on Discounts_cmtd.

      Scheduled Flow

      Contact_SCH_SendBirthdayEmails

      [Entry = None]
      A Scheduled flow that runs daily, checks if a contact is due a Birthday email, and sends it using the template marked Marketing_Birthday

      Before Update Flow, on Account

      Account_BeforeUpdate_SetTaxInformation

      [Entry = IsChanged(ShippingCountry)]

      Changes the tax information, rate, and required elements based on the new country.

      After Update Flow, on Account

      Account_AfterUpdate_NewBillingInfo

      [Entry = IsChanged(ShippingCountry)]
      Fetches related future invoices and updates their billing country and billing information.
      Also sends a notification to Sales Support to ensure country change is legitimate.

      Event-Triggered Flow, creating Invoices, which triggers when a Sales Finished event gets fired

      Invoice_EVT_SalesFinished

      Creates an Invoice and notifies Invoicing about the new invoice to validate based on Sales information

      Record-triggered Email-sending Flow, on Account.

      Account_EML_AfterUpdate

      [Entry = None]
      Handles email notifications from Account based on record changes.

      Flow Elements

      DMLs

      1. Any Query shall always start with Get for any Object, followed by an underscore, or Fetch for CMDT or Settings.

      2. Any Update shall always start with Update followed by an underscore. If it updates a Collection, it shall also be prefixed with List after the aforementioned underscore.

      3. Any Create shall always start with Create followed by an underscore. If it creates a Collection, it shall also be prefixed with List after the aforementioned underscore.

      4. Any Delete shall always start with Del followed by an underscore. If it deletes a Collection, it shall also be prefixed with List after the aforementioned underscore.

      Type

      Name

      Description

      Get accounts matching Active = true
      Get_ActiveAccounts
      Fetches all accounts where IsActive = True

      Update Modified Contacts
      Update_ListModifiedContacts
      Commits all changes from previous assignments to the database

      Creates an account configured during a Screen Flow in a variable called var_thisAccount
      Create_ThisAccount
      Commits the Account to the database based on previous assignments.

      Interactions

      1. Any Screen SHALL always start with S, followed by a number corresponding to the current number of Screens in the current Flow plus 1, followed by an underscore.

      2. Any Action SHALL always start with ACT, followed by an underscore. The Action Name SHOULD furthermore indicate what the action carries out.

        • Any APEX Action SHALL always start with APEX instead, followed by an underscore, followed by a shorthand of the expected outcome. Properly named APEX functions should be usable as-is for naming.

        • Any Subflow SHALL always start with SUB instead, followed by an underscore, followed by the code of the Flow triggered (FL01 for example), followed by an underscore, followed by a shorthand of the expected outcome.

      3. Any Email Alert SHALL always start with EA, followed by an underscore, followed by the code of the Email Template getting sent, an underscore, and a shorthand of what email should be sent.


      Type

      Name

      Description

      Screen within a Flow

      Label: Select Price Book Entries

      Name: S01_SelectPBEs

      Allows selection of which products will be added to the quote, based on pricebookentries fetched.

      Screen that handles errors based on a DML within a Flow

      SERR01_GET_PBE

      Happens if the GET on Pricebook Entries fails. Probably related to Permissions.

      Text element in the first screen of the flow

      S01_T01

      Fill with actual Text from the Text element - there is no description field

      DataTable in the first screen of the flow

      S01_LWCTable_Products

      May be inapplicable as the LWCs may not offer a Description field.


      Example of a Screen containing a Text element

      Screen Elements

      1. Any variable SHALL always start with var followed by an underscore.

        • Any variable that stores a Collection SHALL always, in addition, start with coll followed by an underscore.

        • Any variable that stores a Record SHALL always, in addition, start with sObj followed by an underscore.

        • Any other variable type SHALL always, in addition, start with an indicator of the variable type, followed by an underscore.

      2. Any formula SHALL always start with form followed by an underscore, followed by the data type returned, and an underscore.

      3. Any choice SHALL always start with ch followed by an underscore. The Choice name should reflect the outcome of the choice.

      Type

      Name

      Description

      Formula to get the total number of Products sold

      formula_ProductDiscountWeighted

      Weights the discount by product type and calculates actual final discount. Catches null values for discounts or prices and returns 0.

      Variable to store the recordId

      recordId

      Stores the record Id that starts the flow.


      Exempt from normal conventions because of legacy Salesforce behavior.
      Note: This variable name is CASE SENSITIVE.

      Record that we create from calculated values in the Flow in a Loop, before storing it in a collection variable to create them all

      sObj_This_OpportunityProduct

      The Opportunity Product whose values we calculate.

      A screenshot of the element manager.

      Screenshot from the Manager, with examples of Variables and Screen elements

      Logics

      1. Any Decision SHALL start with DEC if the decision is an open choice, or CHECK if it is a logical terminator, followed by an underscore. The Decision Name SHOULD furthermore be prefixed by Is, Can, or another indicator of the nature of the decision, as well as a short description of what is checked.

        • Any Decision Outcome SHALL start with the Decision Name without any Prefixes, followed by an underscore, followed by the Outcome.

        • The Default Outcome SHOULD be used for error handling and relabeled ERROR where applicable - you can relabel the default outcome!

      2. Any Assignment SHALL always start with SET, ASSIGN, STORE, REMOVE or CALC (depending on the type of the assignment being done) followed by an underscore.

        • SET SHOULD be used for variable updates, mainly for Object variables, where the variable existed before.

        • ASSIGN SHOULD be used for variable initialization, or updates on Non-Object variables.

        • STORE SHOULD be used for adding elements to Collections.

        • REMOVE SHOULD be used for removing elements from Collections.

        • CALC SHOULD be used for any mathematical assignment or complex collection manipulation.

      3. Any Loop SHALL always start with LOOP, followed by an underscore, followed by the description of what is being iterated over. This can vary from the Collection name.

      Type

      Name

      Description

      Assignment to set the sObj_This_OpportunityProduct record values

      SET_OppProdValues

      Sets the OppProd based on calculated discounts and quantities.

      Assignment to store the Opportunity Product for later creation in a collection variable

      Name: STORE_ThisOppProd
      Assignment: {!sObj_coll_OppProdtoCreate}  Add  {!sObj_This_OpportunityProduct}

      Adds the calculated Opp Prod lines to the collvar to create.

      DML to create multiple records stored in a collection sObj variable

      CREATE_OppProds

      Creates the configured OppProd.

      Decision to check selected elements for processing

      Decision: CHECK_PBESelected
      Outcome one:
      CHECK_PBESelected_Yes
      Outcome two:
      CHECK_PBESelected_No
      Default Outcome: Catastrophic Failure

      Check if at least one row was selected. Otherwise terminates to an error screen.

      Decision to sort elements based on criteria

      Decision: DEC_SortOverrides
      Outcome one:
      SortOverrides_Fields
      Outcome two:
      SortOverrides_Values
      Outcome three:
      SortOverrides_Full
      Default Outcome: Catastrophic Failure

      Based on user selection, check if we need to override information within the records, and which information needs to be overridden.

      Email Alert sent from Flow informing user of Invoice reception

      EA01_EI10_InvoiceReceived

      Sends template EI10 with details of the Invoice to pay

      Deployments

      Deployment Best Practices. Focuses on CI/CD as this is the current best practice.

      Deployments

      Introduction - Why are we even doing it like this

      Salesforce deployments are essential for managing and evolving Salesforce environments, especially in a consulting company setting. There are several methods for deploying metadata between organizations, including Change Sets, the Metadata API, and the Salesforce Command Line Interface (CLI). Each method has its unique advantages, but the introduction of Salesforce DX (SFDX) has revolutionized the process, making SFDX-based deployments the standard for the future.

      The main reasons are that it is easy to deploy, and easy to revert to a prior version of anything you deploy - proper CI/CD depends on Git being used, which ensures that everything you do can be rolled back in case of bugs.
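      As a sketch of what rolling back looks like in practice (the commit hash, branch name, and org alias below are placeholders):

      git log --oneline                 # find the commit that introduced the problem
      git revert <bad-commit-sha>       # create a new commit that undoes it
      git push origin main              # let the pipeline redeploy, or redeploy by hand:
      sf project deploy start --source-dir force-app --target-org my-uat-org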

      A table of deployment methods with advantages and disadvantages
      Change Sets
        Advantages: Easy to use with a graphical interface; No additional setup required
        Disadvantages: Limited to connected orgs; Manual and time-consuming; No version control; Can be done ad-hoc

      Metadata API
        Advantages: Supports complex deployments; Can be automated; Broad coverage
        Disadvantages: Requires programming knowledge; Steeper learning curve

      Salesforce CLI (SFDX)
        Advantages: Advanced automation; Supports modern DevOps practices; Version control
        Disadvantages: Steeper learning curve; Initial setup and configuration required; Requires trained staff to maintain

      Third-Party Tools
        Advantages: User-friendly interfaces; Advanced features and integrations
        Disadvantages: Additional costs; May have proprietary limitations

      Despite the complexity inherent in SFDX-based deployments, the benefits are substantial. They enable easy and frequent deployments, better testing by customers, smoother go-lives, and a general reduction in stress around project development and deployment cycles. The structured approach of SFDX ensures that deployments are reliable, repeatable, and less prone to errors.

      To stay fact-based: SFDX deployments allow deploying multiple times a week in a few minutes per deployment. This allows very easy user testing, and also allows finding why a specific issue cropped up. You can check the Examples section to see how and why this is useful.

      It is perfectly true that these deployments require more technical knowledge than third-party tools like Gearset, or than Change Sets. It is our opinion that the tradeoff in productivity is worth the extra training and learning curve.

      One thing that is often overlooked - you can NOT do proper CI/CD without plugging the deployment into your project management. This means the entire project management MUST be thought out around the deployment logic.

      This training is split into the following chapters:

      Deployments

      Chapter 1: The Why, When and By Whom

      This chapter explores the fundamental considerations of Salesforce deployments within the context of consulting projects. It addresses:

      Why do I Deploy?

      In traditional software development, deployments often occur to migrate changes between environments for testing or production releases. However, in the context of Continuous Integration (CI) and Salesforce development, deployments are just synchronization checkpoints for the application, regardless of the target organization.

      Said differently, in CI/CD, deployments are just a way to push commits to the environments that require them.

      CI deployments are frequent, automated, and tied closely to the development cycle.

      Deployments are never the focus in CI/CD, and what is important is instead the commits and the way that they tie into the project management - ideally into a ticket for each commit.

      In software development, a commit is the action of saving changes to a version-controlled repository. It captures specific modifications to files, accompanied by a descriptive message. Commits are atomic, meaning changes are applied together as a single unit, ensuring version control, traceability of changes, and collaboration among team members.

      Commits are part of using Git. 
      Git is a distributed version control system used to track changes in source code during software development. It is free and widely used, within Salesforce and elsewhere.

      So if deployments are just here to sync commits...

      Why do I commit?

      As soon as a commit is useful, or whenever a day has ended.


      Commits should pretty much be done "as soon as they are useful", which often means you have fulfilled one of the following conditions:

      This will allow you to pull your changes from the org, commit your changes referencing the ticket number in the Commit Message, and then push to the repository.
      This will allow others to work on the same repository without issues and to easily find and revert changes if required.

      You should also commit to your local repository whenever the day ends - in any case you can squash those commits together when you merge back to Main, so trying to delay commits is generally a bad idea.
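      A minimal sketch of that day-to-day loop, assuming a ticket reference like PROJ-123, a feature branch named after it, and an org alias of dev-org (all placeholders):

      sf project retrieve start --source-dir force-app --target-org dev-org    # pull your changes from the org
      git add force-app
      git commit -m "PROJ-123: add tax exemption screen flow"                  # reference the ticket in the commit message
      git push origin feature/PROJ-123                                         # push to the remote repository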

      Take the Salesforce-built "DevOps Center" for example.
      They tie every commit to a Work Item and allow you to choose which elements from the metadata should be added to the commit. They then ask you to add a quick description and you're done.
      This is the same logic we apply to tickets in the above description.

      If you're wondering "why not just use DevOps Center", the answer is generally "you definitely should if you can, but you sometimes can't, because it is proprietary and it has limitations you can't work around".
      Also because if you learn how to use the CLI, you'll realize pretty fast that it goes WAY faster than DevOps Center.

      To tie back to our introduction - this forces a division of work into Work Items, Tickets, or whatever other Agile-ism you use internally, at the project management level.

      DevOps makes sense when you work iteratively, probably in sprints, and when the work to be delivered is well defined and packaged.

      This is because....

      When do I Deploy?

      Pretty much all the time, but not everywhere.

      In Salesforce CI/CD, the two main points of complexity in your existing pipeline are going to be:

      The reasons for this are similar but different.

      In the case of the first integration of a commit into the pipeline, most of the time, things should be completely fine. The problem is one that everyone in the Salesforce space knows very well. The Metadata API sucks. And sadly, SFDX... also isn't perfect.
      So sometimes, you might do everything right, but the MDAPI will throw some file or some setting that, while valid as output, is invalid as input. Meaning Salesforce happily gives you something you can't deploy.
      If this happens, you will get an error when you first try to integrate your commit to an org. This is why some pre-merge checks ensure that the commit you did can be deployed back to the org.
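      A validation-only deployment is the usual way such a pre-merge check is performed; as a hedged example of running it by hand (org alias and test level are illustrative):

      # Validates that the commit can be deployed back to an org without actually saving anything
      sf project deploy validate --source-dir force-app --target-org integration-org --test-level RunLocalTests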

      In the case of merging multiple commits, the reason is also that the Metadata API sucks. It will answer the same calls with metadata that is not ordered the same way within the same file, which will lead Git to think there are tons-o-changes... Except not really. This is mostly fine as long as you don't have to merge your work with someone else's where they worked on the same piece of metadata - if so, there is a non-zero chance that the automated merge will fail.

      In both cases, the answer is "ask your senior how to solve this if the pipeline errors out". In both cases also, the pipeline should be setup to cover these cases and error out gracefully.

      "What does that have to do with when I deploy? Like didn't you get lost somewhere?"

      The relation is simple - you should deploy pretty much ASAP to your remote repo, and merge frequently to the main work repository. You should also pull the remote work frequently to ensure you are in sync with others.
      Deploying to remote will run the integration checks to ensure things can be merged, and merging will allow others to see your work. Pulling others' work will ensure you don't overwrite stuff.
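      In day-to-day terms (branch names are placeholders), that boils down to something like:

      git pull --rebase origin develop       # pull the remote work frequently to stay in sync
      git push origin feature/PROJ-123       # push your commits to the remote repository ASAP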

      Deploying to QA or UAT should be something tied to the project management cycle and is not up to an individual contributor.
      For example, you can deploy to QA every sprint end, and deploy to UAT once EPICs are flagged as ready for UAT (a manual step).

      Who Deploys?

      Different people across the lifecycle of the project.

      On project setup, the DevOps engineer that sets up the pipeline should handle the initial deployment and setup.
      For standard work, you should deploy to your own repo, and the automated system should merge to common if all's good.
      For end of sprints, the automated pipeline should deploy to QA.
      For UAT, the Architect assigned to the project should run the required pipelines.

      In most cases, the runs should be automatic, and key points should be covered by technical people.

      Deployments

      Chapter 2: Software List

      This chapter explores the actual tools we are using in our example, the basic understanding needed for each tool, and an explanation of why we're doing things this way.

      In short, our example relies on:

      You can completely use other tools if your project, your client, or your leadership want you to use other things.
      The main reason we are using these in this example is that they form a tech stack that is very common with customers and widely used at a global level, while also leveraging reusable elements as much as possible - technically speaking, a lot of the configuration we do here is directly reusable with another pipeline provider, and the link to tickets is also something that can be integrated using another provider.

      In short "use this, or something else if you know what you're doing".

      So What are we using

      The CLI

      The first entrypoint into the pipeline is going to be the Salesforce Command Line. You can download it here.
      If you want a graphical user interface, you should set up VSCode, which you can do by following this Trailhead. You can start using the CLI directly via the terminal if you already know what you're doing otherwise. If you're using VSCode, download Azul as well to avoid errors down the line.

      We'll be using the Salesforce CLI to:
      - log in to organizations, and avoid that pesky MFA;
      - pull changes from an organization once our config is done;
      - rarely, push hotfixes to a UAT org.

      For some roles, mainly architects and developers, we will also use it to:

      What this actually does is allow you to interact with Salesforce. We will use it to get the configuration, security, and setting files that we will then deploy.

      This allows us not only to deploy, but also to have a backup of the configuration, and an easy way to edit it via text editing software.

      The configuration needed is literally just the installation to start - we'll set up a full project later down the line.
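      As an illustration of those uses (org aliases are placeholders), the commands involved look like this:

      sf org login web --alias dev-org                                          # log in once and store the authentication locally
      sf project retrieve start --source-dir force-app --target-org dev-org     # pull changes from an organization
      sf project deploy start --source-dir force-app --target-org uat-org       # rarely: push a hotfix to a UAT org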

      GIT

      You'll then need to download Git, as well as a GUI if you're not used to using it directly from the command line. Git is VERY powerful but also quite annoying to learn fully, which is why we will keep its usage simple in our case.

      We'll be using Git to:
      - version our work so we can easily go back to earlier configurations in case of issues;
      - document what we did when we modified something;
      - get the work that other people have done;
      - upload our work to the repositories for the project.

      You'll need a bit more configuration once you're done installing - depending on the GUI you use (or if you're using the command line) the how depends on the exact software, but in short you'll need to configure git with your user name and your user email.

      Logging in to Bitbucket and getting your repository from there will come later - once you've given your username and email, and configured your UI, we will consider that you are done for now.

      If you're a normal user, this is all you'll see of git.
      If you're a Dev or an Architect, you'll also be using the Branches and Merges functions of Git - mostly through the Bitbucket interface (and as such, with Pull Requests instead of Merges).

      Bitbucket

      As said in the intro, we're using Bitbucket because we're using Bitbucket. You can use GitHub, GitLab, Gitea, whatever - but this guide is for Bitbucket.

      Bitbucket, much like Salesforce, is a cloud solution. It is part of the Atlassian cloud offering, which also hosts JIRA, which we'll be configuring as well. You'll need to authenticate to your workspace (maybe get your Administrator to get you logins), in the format https://bitbucket.org/myworkspace

      You will see that Bitbucket is a Git server that contains Git repositories.
      In short, it is the central place where we'll host the different project repositories that we are going to use.
      Built on top of the Git server are also subordinate functions such as Pull Requests, Deployments, and Pipelines - all of which we're going to use.

      Seeing as we want this to be connected with our Atlassian cloud, we'll also ask you to go to https://bitbucket.org/account/settings/app-passwords/ which allows you to create application passwords, and to create one for Git.

      In detail:

      Extra Stuff

      CLI Extensions

      SGD

      SGD, or Salesforce-Git-Delta, is a command-line plugin that allows the CLI to automatically generate a package.xml and a destructiveChanges.xml based on the difference between two commits.
      It allows you to do in Git what the CLI does alone using Source Tracking.
      Why is it useful then? Because Source Tracking is sometimes buggy, and also because in this case we're using Bitbucket, so it makes generating these deployment files independent from our machines.
      SGD is very useful for inter-org deployment, which should technically be quite rare.
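      A hedged example of generating the delta between the last two commits (flag names follow the plugin documentation at the time of writing and may differ across plugin versions):

      # Generates package.xml and destructiveChanges.xml for everything that changed between HEAD^ and HEAD
      sfdx sgd:source:delta --from "HEAD^" --to "HEAD" --output "."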

      SFDMU

      SFDMU, or the Salesforce Data Move Utility, is another command-line plugin which is Data Loader on steroids for when you want to migrate data between orgs or back stuff up to CSVs.
      We use this because it allows migrating test data or config data (that last one should be VERY rare what with the presence of CMDT now) very easily, including when you have hierarchies (Contacts of Accounts, etc.).
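      A hedged example of an org-to-org run (the org usernames are placeholders, and the data to move is described in an export.json file as per the plugin documentation; check the current docs for exact flag names):

      # Moves the data described in ./data/export.json from the source org to the target org
      sfdx sfdmu:run --sourceusername source@example.com --targetusername target@example.com --path ./data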

      Code Analyzer

      AzulJDK

      Basically just Java, but free. We don't use the Oracle Java runtime because its licensing is now extremely expensive.

      A terminal emulator

      If you don't spend a lot of time in the Terminal, you might not realize that terminals aren't all equal.
      A nice terminal emulator gives you things like copy/paste, better UX in general.
      It's just quality of life.

      A text editor

      You should use VSCode unless you really want to do everything in separate apps.
      If you're an expert you can use whatever floats your boat.

       

       



      Deployments

      Chapter 3: Basic Machine Setup

      1 - Install Local Software

      If you are admin on your machine, download Visual Studio Code from this link. Otherwise, use whatever your IT has to install software, whether it be Software Center, opening a ticket, or anything else of that ilk.
      As long as you're doing that, you can also install a JDK like AZUL, as well as Git, and a nice terminal emulator.
      Also remember to install the Salesforce CLI.

      These elements are all useful down the line, and doing all the setup at once avoids later issues.

      2 - Configure the CLI

      Opening your beautiful terminal emulator, run

      sf update

      You should see @salesforce/cli: Updating CLI run for a bit.

      If you see an error saying sf is not a command or program, something went wrong during the installation in step 1. Contact your IT (or check the installation page of the CLI if you're an Admin or not in an enterprise context).

      Once that's done, run

      echo y | sf plugins install sfdmu sfdx-git-delta code-analyzer

      Because sgd is not signed, you will get a warning saying that "This plugin is not digitally signed and its authenticity cannot be verified". This is expected, and you will have to answer y (yes) to proceed with the installation.
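      To check that everything installed correctly, you can list the installed plugins afterwards:

      sf plugins    # should list sfdmu, sfdx-git-delta and the code analyzer plugin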

      Once you've done that, run:

      git config --global user.name "FirstName LastName" replacing Firstname and Lastname with your own.

      git config --global user.email "email@server.tld" replacing the email with yours

      If you're running Windows - git config --global core.autocrlf true

      If you're running Mac or Linux - git config --global core.autocrlf input

      The above commands tell git who you are, and how to handle line endings.
      All of this setup has to be done once, and you will probably never touch it again.
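      If you want to double-check what you just configured:

      git config --global --list    # shows user.name, user.email and core.autocrlf among other settings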

      Finally, run

      java --version

      If you don't see an error, and you see something like openjdk 21.0.3 2024-04-16 LTS, then you installed Zulu properly and you're fine.

      3 - Configure VSCode

      Open up VSCode.

      Go to the Extensions view in the side panel (it looks like three squares), search for "Salesforce", then install the Salesforce Extension Pack.

      Then search for Atlassian and install "Jira and Bitbucket (Atlassian Labs)".

      Finally, search for and install "GitLens - Git supercharged".

      Then go to Preferences > Settings > Salesforcedx-vscode-core: Detect Conflicts At Sync and check this checkbox.

      Once all this is done, I recommend you go to the side panel, click on Source Control, and drag-and-drop both the Commit element and the topmost element to the right of the editor.

All this setup gives you more visual helpers and shortcuts. If you skip some of these installs, you may be missing pieces that the rest of this guide assumes you have.

      This concludes basic machine setup.
None of this needs to be done again on a machine that is already configured.


      Deployments

      Chapter 4 - Base Project Setup

      This chapter explores how to set up your project management and version control integration, ensuring proper tracking from requirement to deployment.

      Initial Project Creation

      SFDX Project Setup

      Create Base Project

sf project generate \
    --name "your-project-name" \
    --template standard \
    --default-package-dir force-app \
    --namespace "your_namespace"   # only if you have one
      

      Required Project Structure

      your-project-name/
      ├── config/
      │   └── project-scratch-def.json
      ├── force-app/
      │   └── main/
      │       └── default/
      ├── scripts/
      │   ├── apex/
      │   └── soql/
      ├── .forceignore
      ├── .gitignore
      ├── package.json
      └── sfdx-project.json
      

      Configuration Files Setup

      .forceignore Essential Entries

      # Standard Salesforce ignore patterns
      **/.eslintrc.json
      **/.prettierrc
      **/.prettierignore
      **/.sfdx
      **/.sf
      **/.vscode
      **/jsconfig.json
      
# Org-managed metadata not tracked in source
**/force-app/main/default/profiles
**/force-app/main/default/settings
      

      .gitignore Essential Entries


      # Salesforce cache
      .sf/
      .sfdx/
      .localdevserver/
      
      # VS Code IDE
      .vscode/
      
      # System files
      .DS_Store
      *.log
      


      Bitbucket Repository Integration

      Initial Repository Setup

      In Bitbucket:

      - Create new repository
      - Repository name: your-project-name
      - Access level: Private
- Include README: No (the local project history will be pushed as the first commit)
      - Include .gitignore: No (we'll use our own)
      

      Linking Local Project to Remote Repository

      Initialize Git Repository


cd your-project-name
git init
git branch -M main
git add .
git commit -m "Initial project setup"
      

      git remote add origin https://bitbucket.org/your-workspace/your-project-name.git
      git push -u origin main
      

      Branch Protection Rules

      Configure in Bitbucket Repository Settings:

      YAML
      Branch Permissions:
        main:
          - Require pull request approvals
          - Minimum approvers: 2
          - Block force pushes
        develop:
          - Require pull request approvals
          - Minimum approvers: 1
          - Block force pushes
      
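
The block above is descriptive, not an actual Bitbucket configuration format; these rules are normally set under Repository settings > Branch permissions. If you prefer to script them, a hedged sketch against the Bitbucket Cloud REST API could look like this (the workspace, repository slug, username and app password are placeholders):

Bash
# Require two approvals to merge into main.
curl -s -u "your-username:app-password" \
  -H "Content-Type: application/json" \
  -X POST "https://api.bitbucket.org/2.0/repositories/your-workspace/your-project-name/branch-restrictions" \
  -d '{"kind": "require_approvals_to_merge", "branch_match_kind": "glob", "pattern": "main", "value": 2}'

# Block force pushes (history rewrites) on main.
curl -s -u "your-username:app-password" \
  -H "Content-Type: application/json" \
  -X POST "https://api.bitbucket.org/2.0/repositories/your-workspace/your-project-name/branch-restrictions" \
  -d '{"kind": "force", "branch_match_kind": "glob", "pattern": "main"}'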

      Project Configuration Files

      sfdx-project.json Configuration

      JSON
      {
          "packageDirectories": [
              {
                  "path": "force-app",
                  "default": true,
                  "package": "your-project-name",
                  "versionName": "Version 1.0",
                  "versionNumber": "1.0.0.NEXT"
              }
          ],
          "namespace": "",
          "sourceApiVersion": "60.0"
      }
      

      project-scratch-def.json Base Configuration

      JSON
      {
          "orgName": "Your Project Name",
          "edition": "Enterprise",
          "features": ["EnableSetPasswordInApi"],
          "settings": {
              "lightningExperienceSettings": {
                  "enableS1DesktopEnabled": true
              },
              "securitySettings": {
                  "passwordPolicies": {
                      "enableSetPasswordInApi": true
                  }
              }
          }
      }
      
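
To check that the definition file actually works, you can create a scratch org from it (this assumes a Dev Hub is already authorized and set as default; the alias and duration are just examples):

Bash
# Create a scratch org from the definition file, make it the default org,
# and keep it for 7 days.
sf org create scratch --definition-file config/project-scratch-def.json \
  --alias ProjectScratch --set-default --duration-days 7

# Open it in the browser once it is ready.
sf org open --target-org ProjectScratch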

      Post-Setup Verification

      Run these commands to verify setup:

      Bash
# Verify the SFDX project file is present and inspect its contents
cat sfdx-project.json
      
      # Verify Git setup
      git remote -v
      
      # Verify Bitbucket connection
      git fetch origin
      
# Dry-run a push to main (checks remote access; branch restrictions are enforced on real pushes)
git push origin main --dry-run

      JIRA Configuration

      Required JIRA Workflow States

      Text Only
      Backlog -> In Progress -> In Review -> Ready for Deploy -> Done
      

      Bitbucket Integration

      Work Segmentation

      Story Creation Rules

      Stories should be:
      - Independent (can be deployed alone)
      - Small enough to be completed in 1-3 days
      - Tagged with proper metadata types
      - Linked to an Epic

      Required Story Fields

      Integration Setup

      JIRA to Bitbucket Connection

      1. In JIRA:

        • Navigate to Project Settings
        • Enable "Development" integration
        • Link to Bitbucket repository
      2. In Bitbucket:

        • Configure branch policies
        • Setup automatic JIRA issue transitions
        • Enable smart commits

      Commit Message Format

      Text Only
      [PROJ-123] Brief description
      
      - Detailed changes
      - Impact on existing functionality
      - Related configuration
      
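
With smart commits enabled, the issue key in the message also lets a commit log time, comment on, or transition the JIRA issue. A sketch (the issue key, transition name and time logged are examples that must match your own project and workflow):

Bash
# Link the commit to PROJ-123, log 2 hours, and move the issue to "In Review"
# using smart commit commands.
git commit -m "[PROJ-123] Add delivery date field on Case #time 2h #in-review"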

      Pipeline Configuration

Get the bitbucket-pipelines.yml file

Place it at the root of the repository and configure the required variables in the Bitbucket repository settings
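
If you don't have the project's pipeline file at hand, the heredoc below writes a minimal illustration of its general shape: a validation-only delta deploy on pull requests. This is a sketch, not the project's actual file; the Docker image, the SFDX_AUTH_URL repository variable and the branch name are assumptions to adapt.

Bash
# Write an illustrative bitbucket-pipelines.yml at the repository root.
cat > bitbucket-pipelines.yml <<'EOF'
image: salesforce/cli:latest-full

pipelines:
  pull-requests:
    '**':
      - step:
          name: Validate delta against the target org
          script:
            - echo y | sf plugins install sfdx-git-delta
            - echo "$SFDX_AUTH_URL" > auth.url
            - sf org login sfdx-url --sfdx-url-file auth.url --alias target
            - git fetch origin main
            - sf sgd source delta --from "origin/main" --to "HEAD" --output "."
            - sf project deploy start --manifest package/package.xml --target-org target --dry-run
EOF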

      Automation Rules

      JIRA Automation

      Bitbucket Pipelines

      Security and Access

      Required Team Roles

      Access Matrix

      Text Only
Role          | JIRA  | Bitbucket | Salesforce
Project Admin | Admin | Admin     | System Admin
Developer     | Write | Write     | Developer
QA            | Write | Read      | Read-only
      

      Remember that this setup needs to be done only once per project, but maintaining the discipline of following these structures is crucial for successful CI/CD implementation.

      The key to success is ensuring that:
      1. Every piece of work has a ticket
      2. Every commit links to a ticket
      3. Every deployment is traceable
      4. All changes are reviewable

      This structured approach ensures that your project management directly ties into your deployment pipeline, making it easier to track changes and maintain quality throughout the development lifecycle.