# Best Practices

# Field Conventions

General conventions about field creation, grouping, naming, etc.

# General Conventions

All naming conventions are [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) and [RFC 6919](https://tools.ietf.org/html/rfc6919) compliant.

1. All field API names **MUST** be written in English, even when the label is in another language.
2. All field API names **MUST** be written in PascalCase.
3. Fields **SHOULD NOT** contain an underscore in the field name, except where explicitly defined otherwise in these conventions.
4. Fields generally **MUST (but you probably won't)** contain a description.
5. In all cases where the entire purpose of the field is not evident by reading the name, the field **MUST** contain a description.
6. If the purpose of the field is ambiguous, the field **MUST** contain a help text. In cases where the purpose is clear, the help text **COULD** also be defined for clarity's sake.
7. Field API names should respect the following prefixes and suffixes.2 Prefixes and suffixes **SHALL NOT** be prepended by an underscore.
Field Type | Prefix | Suffix |
---|---|---|
MasterDetail | `Ref` | |
Lookup | `Ref` | |
Formula | `Auto` | |
Rollup Summary | `Auto` | |
Filled by automation (APEX)1 | `Trig` | |
Picklist or Multipicklist | `Pick` | |
Boolean | `Is` or `IsCan`3 | |
#### Examples

Object | Field type | Comment | Field Label | Field API Name | Field Description |
---|---|---|---|---|---|
Case | Lookup | Looks up to Account | Service Provider | ServiceProviderRef\_\_c | Links the case to the Service Provider who will conduct the task at the client's. |
Account | Formula | Made for the Accounting department only | Solvability | Accounting\_SolvabilityAuto\_\_c | Calculates solvability based on revenue and expenses. Sensitive data, should not be shared. |
Contact | Checkbox | | Sponsored ? | IsSponsored\_\_c | Checked if the contact was sponsored into the program by another client. |
**Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder.** *These naming convention best practices will remain in place for reference purposes so long as Workflow Rules may exist in a Salesforce org*
A workflow trigger ***MUST*** always be named after what triggers the workflow, and not after its actions. In all cases possible, the number of triggers per object ***SHOULD*** be limited - reading the existing trigger names allows reusing existing ones when possible. Knowing that all automations count towards the Salesforce-allotted CPU time per record, a consultant ***SHOULD*** consider how to limit the number of workflows in all cases.

1. All Workflow Triggers ***MUST*** contain a Bypass Rule check.
2. A Workflow Trigger ***SHALL*** always start with `WF`, followed by a number corresponding to the number of workflows on the triggering Object, followed by an underscore.
3. The Workflow Trigger name ***MUST*** try to explain in a concise manner what triggers the WF. Note that conciseness trumps clarity for this field.
4. All Workflow Triggers ***MUST*** have a description detailing how they are triggered.
5. Wherever possible, a consultant ***SHOULD*** use operators over functions.

#### Examples

Object | WF Name | Description | WF Rule |
---|---|---|---|
Invoice | WF01\_WhenInvoicePaid | This WF triggers when the invoice Status is set to "Paid". Triggered from another automation. | `!$User.BypassWF__c && ISPICKVAL(Status__c, "Paid")` |
Invoice | WF02\_CE\_WhenStatusChanges | This WF triggers every time the Status of the invoice is changed. | `!$User.BypassWF__c && ISCHANGED(Status__c)` |
Contact | WF01\_C\_IfStreetBlank | This WF triggers on creation if the street is Blank | `!$User.BypassWF__c && ISBLANK(MailingStreet)` |
**Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder.**
1. A Workflow Field Update ***MUST*** start with `FU`, followed by a number corresponding to the number of field updates on the triggering Object.
2. A Workflow Field Update ***SHOULD*** contain the Object name, or an abbreviation thereof, in the Field Update Name.1
3. A Workflow Field Update ***MUST*** be named after the field that it updates, and then the values it sets, in the most concise manner possible.
4. The Description of a Workflow Field Update ***SHOULD*** give precise information on what the field is set to.

#### Examples

Object | FU Name | Description |
---|---|---|
Contact | FU01\_SetEmailOptOut | Sets the Email Opt Out checkbox to TRUE. |
Invoice | FU02\_SetFinalBillingStreet | Calculates the billing street based on if the client is billed alone, via an Agency, or via a mother company. Part of three updates that handle this address. |
Contact | FU03\_CalculateFinalAmount | Uses current Tax settings and information to set the final amount |
Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder. **Email Alerts are NOT part of the Workflow Rule deprecation plan- you can and should continue to configure and use Email Alerts. Flows can reference and execute these Email Alerts**
1. A Workflow Email Alert ***MUST*** start with `EA`, followed by a number corresponding to the number of email alerts on the triggering Object.
2. A Workflow Email Alert ***SHOULD*** contain the Object name, or an abbreviation thereof, in the Email Alert Name.
3. A Workflow Email Alert's Unique Name and Description ***SHOULD*** contain the exact same information, except where a longer description is absolutely necessary.1
4. A Workflow Email Alert ***SHOULD*** be named after the type of email it sends, or the reason the email is sent. Note that declaratively, the Name of the template used to send the email is always shown by default in Email Alert lists.

#### Examples

Object | EA Name | Description |
---|---|---|
Invoice | EA01\_Inv\_SendFirstPaymentReminder | EA01\_Inv\_SendFirstPaymentReminder. |
Invoice | EA02\_Inv\_SendSecondPaymentReminder | SendSecondPaymentReminder |
Contact | EA03\_Con\_SendBirthdayEmail | EA03\_Con\_SendBirthdayEmail |
Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder.
1. A Workflow Task Unique Name ***MUST*** start with `TSK`, followed by a number corresponding to the number of tasks on the triggering Object.
2. A Workflow Task Unique Name ***COULD*** contain the Object name, or an abbreviation thereof. This is to avoid different conventions for Workflow Actions in general.

Most information about tasks is displayed by default declaratively, and creating a task should rarely impact internal or external processes in such a manner that urgent debugging is required. As users will in all cases never see the Unique Name of a Workflow Task, it is not needed nor recommended to norm them more than necessary.

# Workflow Outbound Messages

Workflow Rules (along with Process Builders) are now on a deprecation / End-of-Life plan. Existing Workflow Rules will continue to operate for the foreseeable future, but in the near future (Winter 23) Salesforce will begin to prevent creating new Workflow Rules. Establish a plan to migrate to Flows, and create any new automation using Flow Builder. **Outbound Messages are NOT part of the Workflow Rule deprecation plan - you can and should continue to configure and use Outbound Messages when appropriate. Flows can reference and execute these Outbound Messages.**
1. An Outbound Message Name ***MUST*** start with `OM`, followed by a number corresponding to the number of outbound messages on the triggering Object.
2. An Outbound Message Name ***COULD*** contain the Object name, or an abbreviation thereof. This is to avoid different conventions for Workflow Actions in general.
3. An Outbound Message ***MUST*** be named after the Service that it sends information to, and then the information it sends, in the most concise manner possible.
4. The Description of an Outbound Message ***SHOULD*** give precise information on why the Outbound Message is created.
5. Listing the fields sent by the Outbound Message is ***NOT RECOMMENDED***.

#### Examples

Object | OM Name | Description |
---|---|---|
Invoice | OM01\_Inv\_SendBasicInfo | Send the invoice header to the client software. |
Invoice | OM02\_Inv\_SendStatusPaid | Sends a flag that the invoice was paid to the client software. |
Contact | OM01\_SendContactInfo | Sends most contact information to the internal Directory. |
Name | Formula | Error Message | Description |
---|---|---|---|
OPP\_VR01\_CancelReason | `!$Setup.Bypasses__c.IsBypassVR__c && TEXT(Cancellationreason__c) = "Other" && ISBLANK(OtherCancellationReason__c)` | If you select "other" as a cancellation reason, you must fill out the details of that reason. \[OPP\_VR01\] | Prevents selecting "Other" as a cancellation reason without putting a comment in. \[OPP\_VR01\] |
OPP\_VR02\_NoApprovalCantReserve | `!$Setup.Bypasses__c.IsBypassVR__c && !IsApproved__c && ( ISPICKVAL(Status__c,"Approved - CC ") || ISPICKVAL(Status__c,"Approved - Client") || ISPICKVAL(Status__c,"Paid") )` | The status cannot advance further if it is not approved. \[OPP\_VR02\] | The status cannot advance further if it is not approved. \[OPP\_VR02\] |
# Bypasses

If your validation rule bypasses look like this, this is bad: `$Profile.id <> '00eo0000000KXdC' && somefield__c = 'greatvalue'`
The most maintainable way to create a bypass is to create a hierarchical custom setting to store all the bypass values that you will use. This means that the custom setting should be named "Bypasses", and contain one field per bypass type that you need.

*Great-looking bypass setup right there*
This setup allows you to create bypasses either by profile or by user. It lets you reference bypasses directly in your validation rules, letting Salesforce work out whether or not the bypass is active for that specific user or profile. In the validations themselves, using this bypass is as easy as referencing it via the formula builder. An example for validation rules could be:

`!$Setup.Bypasses__c.IsBypassVR__c && Name = "test"`

You can use a single custom setting to host all the bypasses for your organization, including Validation Rules, Workflows, Triggers, etc. As additional examples, "Bypass Emails" and "Bypass Invoicing Process" are also present in the custom setting - adding the check for these checkboxes in the automations that trigger emails and in the automations that belong to Invoicing, respectively, allows partial deactivation of automations based on criteria.
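The same custom setting can gate Apex as well. Below is a minimal, hypothetical sketch of what that looks like in a trigger - the setting name `Bypasses__c` and the VR field mirror the ones above, while `IsBypassTriggers__c` is an assumed extra checkbox you would add for trigger bypasses.

```apex
// Hypothetical trigger-side bypass check, assuming a hierarchy custom setting
// Bypasses__c with a checkbox field IsBypassTriggers__c.
trigger AccountTrigger on Account (before insert, before update) {
    // getInstance() resolves the user-level value first, then the profile-level
    // value, then the org default - the same hierarchy described above.
    if (Bypasses__c.getInstance().IsBypassTriggers__c == true) {
        return; // bypass is active for this user or profile: skip all logic
    }
    // ... normal trigger logic goes here
}
```

Because the check reads the hierarchy custom setting, flipping the checkbox on a single user (for example, the integration user doing a data load) disables the trigger for that user only, without touching anyone else.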
# Data Migration Best Practices

An attempt to help you not delete your production database

# 1 - Data Migrations Checklist

The following is a semi-profanity-ridden attempt at explaining one way to do data migrations while following best practices. It is rather long and laced with colorful language. If you have read it already, or if you want to avoid the profanity, you can consult the following checklist in the *beautiful* table below. Note that all elements are considered mandatory.

As a quick note, and a reminder even if you've read the whole version: DO NOT MODIFY SOURCE DATA FILES, EVER.

If you're doing data migrations, either use a script to modify the source files and save the edited version, or use Excel workbooks that open the source file and then save the edited result elsewhere. Yes, even if the source is an Excel file.

Why? Because sources change. People forget stuff, files aren't well formatted, shit gets broken, and people are human - meaning that one-time data import is actually going to be done multiple times. Edit the source file, and you get to do everything all over again. Use scripts or workbooks to do the transformations? Point them at the new source file and BAM, Bob's your uncle.

Scripts and tools you might want to use:

- [OpenRefine](http://openrefine.org/)
- [SFXD's PSCSV](https://github.com/SFXD/PSCSV)
- Salesforce's official [Data Migration Tool](https://github.com/forcedotcom/Data-Migration-Tool) for cross-org data loading
- [Amaxa](https://gitlab.com/davidmreed/amaxa) for related objects, by David Reed

Or, if you prefer Excel, open a blank workbook, import the source file via the "Data" ribbon tab, select "From Text/CSV" (or whatever matches your source type), then save it as both:

- the construction Excel workbook,
- a NEW CSV file after doing your changes in formula columns.

That way, when the source file changes, you can just open the construction workbook again and resave.

- Do all records have an external Id?
Salesforce does not back up your data.
If you delete your data, and the amount deleted is bigger than what fits in the recycle bin, it will be deleted forever. You could try restoring it via Workbench, praying that the automated Salesforce jobs haven't wiped your data yet.

If you update data, the moment the update hits the database (the DML is done), the old data is lost. Forever. If you don't have a backup, you could try seeing if you turned on field history.

If worst comes to worst you can pay 10 000€ (not joking, see [here](https://help.salesforce.com/apex/HTViewSolution?id=000003594)) to Salesforce to restore your data. Did I mention that Salesforce would give you a CSV extract of the data you had in Salesforce? Yeah, they don't restore the org for you. You'd still need to restore it table per table with a data loading tool.

But let's try to avoid these situations by following these steps. These steps apply to any massive data load, but especially to deletions.

### GENERAL DATA OPERATIONS STUFF

#### Tools

Do not use Data Loader if you can avoid it. If you tried doing a full data migration with Data Loader, you will not be helped. By this I mean I will laugh at you and go back to drinking coffee. Data Loader is a BAD tool.
[Amaxa](https://gitlab.com/davidmreed/amaxa) is awesome and handles objects that are related to one another. It's free and awesome.

[Jitterbit](https://www.jitterbit.com/solutions/salesforce-integration/salesforce-data-loader/) is like Data Loader but better. It's free. It's getting old though, and some of the newer stuff, like Time fields, won't work.

[Talend](https://www.talend.com/) requires some tinkering, but knowing it will allow you to migrate from almost anything, to almost anything.

Hell, you can even use SFDX to do data migrations. But yeah, don't use Data Loader. Even Dataloader.io is better, and that's a paid solution. Yes, I would recommend you literally pay rather than use Data Loader.

If you MUST use Data Loader, EXPORT THE MAPPINGS YOU ARE DOING. You can find how to do so in the [Data Loader user guide](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/data_loader.htm).

Even if you think you will do a data load only once, the reality is you will do it multiple times. Plus, for documentation, having the mapping file is best practice anyway. Always export the mapping, or make sure it is reusable without rebuilding it, whatever the tool you use.
##### Volume

If you are loading a big amount of data or the org is mature, read [this document](https://resources.docs.salesforce.com/sfdc/pdf/salesforce_large_data_volumes_bp.pdf) entirely before doing anything. LDV (Large Data Volumes) generally starts at a few million records, or several gigabytes of data. Even if you don't need this right now, reading it should be best practice in general. Yes, read the whole thing. The success of the project depends on it, and the document is quite short.

#### Deletions

If you delete data in prod without a backup, this is bad. If the data backup was not checked, this is bad. If you did not check automations before deleting, this is also bad.
Seriously, before deleting ANYTHING, EVER:

- get a backup
- check automations
- check that the backup is valid.

#### Data Mapping

For Admins or Consultants: you should avoid mapping the data yourself. Any data mapping you do should be done with someone from the end-user's side who can understand what you are saying. If no one like this is available, spend time with a business operative so you can do the mapping, and make them sign off on it. The client signing off on the mapping is drastically important, as this will impact the success of the data load, AND what happens if you do not successfully load it - or if the client realizes they forgot something.

Basic operations for a data mapping are as follows:

- study the source and target data models
- establish the mapping from table to table, field to field, or both if necessary
- for each table and field, establish the Source of Truth, meaning which data should take priority if conflicts exist
- establish an External Id from all systems to ensure data mapping is correct
- define which users can see what data; update permissions if needed.

#### Data Retrieval

Data needs to be extracted from the source system. This can be via API, an ETL, a simple CSV extract, etc. Note that in general it is better to avoid storing data as CSV - ideally you should do a point-to-point load which simply transforms the data - but as most clients can only extract CSV, the following best practices apply:

- Verify data format
  - Date format yyyy-mm-dd
  - DateTime format yyyy-mm-ddT00:00:00Z
  - Emails no longer than 80 characters
  - Text containing carriage returns is qualified by "
  - Other field-specific verifications regarding length and separators for text, numbers, etc.
- Verify table integrity
  - Check that all tables have basic data for records:
    - LastName and Account for Contact
    - Name for Account
    - Any other system-mandatory fields
  - Check that all records have the agreed-upon external Ids
- Verify parsing
  - Do a dummy load to ensure that the full extracted data can be mapped and parsed by the selected automation tool

#### Data Matching

You should already have created External Ids on every table if you are upserting data. If not, do so now.

DO NOT match the data in Excel. Yes, INDEX(MATCH()) is a beautiful tool. No, no one wants you to spend hours doing that when you could be doing other stuff, like drinking a cold beer.

If you're using VLOOKUP() in Excel, stop. Read up on how to use INDEX(MATCH()). You will save time, the results will be better, and you will thank yourself later. The only thing to remember is to always add "0" as the third parameter to MATCH so it forces exact results.

Store the IDs of the external system in the target tables, in the External Id field. Then use that when recreating lookup relationships to find the records. This saves time, avoids wrong matching, and best of all, if the source data changes, you can just run the data load operation again on the new file without spending hours matching IDs.
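If your tool of choice supports it, you can also let the platform itself do the matching at load time. A minimal Apex-flavored sketch of the idea follows - `Legacy_Id__c` is an assumed custom external ID field holding the source system's primary key, not a standard field.

```apex
// Minimal sketch: upsert keyed on an external ID instead of matching in Excel.
// Assumes Contact has a custom external ID field Legacy_Id__c that stores the
// primary key of the source system.
List<Contact> contacts = new List<Contact>{
    new Contact(LastName = 'Doe', Legacy_Id__c = 'SRC-0001'),
    new Contact(LastName = 'Rey', Legacy_Id__c = 'SRC-0002')
};
// Rows whose Legacy_Id__c already exists are updated, the rest are created -
// no manual ID matching required. The 'false' allows partial success.
List<Database.UpsertResult> results =
    Database.upsert(contacts, Contact.Legacy_Id__c, false);
```

Most loading tools (including Data Loader) expose the same upsert-on-external-ID operation declaratively, so you rarely need actual code - the point is that the matching happens in Salesforce, not in a spreadsheet.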
# 3 - Data Migration Step-by-step - Loading

#### FIRST STEPS

1. Log in to Prod. Is there a weekly backup running, encoded as UTF-8, in Setup > Data Export?
   - Nope: Select UTF-8 encoding and click "Export Now". This will take hours. Turn that weekly export on. Make sure the client KNOWS it's on. Make sure they have a strategy for downloading the ZIP file that is generated by the extract weekly.
   - Yup: Is it UTF-8 and has it run in the last 48 hours?
     - Yup: Confer with the client to see if additional backup files are needed. Otherwise, you're good.
     - Nope: If the export isn't UTF-8, it's worthless. If it's more than 48h old, confer with the client to see if additional backup files are needed. In all cases, you should consider doing a new, manual export.

SERIOUSLY MAKE SURE YOU CHANGE THE ENCODING. Salesforce has some dumb rule of not defaulting to UTF-8. YOU NEED UTF-8. Accents and ḍîáꞓȑîȶîꞓs exist. Turns out people like accents and non-roman alphabets, who knew?
   - If Data Export is not an option because it has run too recently, or because the encoding was wrong, you can also do your export by using whatever tool you want to query all the relevant tables. Remember to set UTF-8 as the encoding on both export and import.
2. Check the org code and automation.
   - Seriously, look over all triggers that can fire when you upload the data. You don't want to be that consultant that sent a notification email to 50,000 people. Just check the triggers, WFs, PBs, and see what they do. If you can't read triggers, ask a dev to help you. Yes, check the Workflows and Process Builders too. They can send emails as well.
   - Check Process Builders again. Are there a lot firing on an object you are loading? Make note of that for later - you may have to deactivate them.
3. Check data volume.
   - Is there enough space in the org to accommodate the extra data? (This should be part of pre-project checks, but check it again.)
   - Are the volumes to load big enough to cause problems API-call-wise? If so, you may need to consider using BULK jobs instead of normal operations.
   - In case data volumes are REALLY big, you will need to abide by LDV (large data volume) best practices, including not doing upserts, deferring sharing calculations, and grouping records by parent record and owner before uploading. The full list is available in the PDF linked above and [here](https://resources.docs.salesforce.com/sfdc/pdf/salesforce_large_data_volumes_bp.pdf).

#### PREPARING THE JOBS

Before creating a job, ask yourself which job type is best. Upsert is great but very resource-intensive, and is more prone to RECORD\_LOCK than other operation types. It also takes longer to complete. Maybe think about using the BULK API. In all cases, study what operation you do and make sure it is the right one.

Once that is done... You are able to create insert, upsert, query and deletion jobs, and change select parts of them. That's because you are using a real data loading tool. This is important because this means you can:

- Create a new Sandbox.
- In whatever tool you're using, create the operations you will do, and name them so you know in which order you need to trigger them.
- Prepare each job, pointing them to a sandbox.
- Do a dummy load in the sandbox. Make sure to set the start line to something near the end of the file so you don't clog the sandbox up with all the data.
- Make sure everything looks fine. If something fails, correct the TRANSFORMATION, not the file, except in cases where doing so would be prohibitively long. Meaning if you have to redo the load, you can run the same scripts you did before to get a nice CSV to upload.

#### GETTING READY TO DO THAT DATA OPERATION

This may sound stupid, but warn your client, the PM, and the end users that you're doing a data load. There's nothing worse than losing data or seeing stuff change without knowing why. Make sure key stakeholders are aware of the operation, the start time, and the estimated end time. Plus, you need them to check the data afterwards to ensure it's fine.
You've got backups of every single table in the Production org. Even if you KNOW you do, you open the backups and check they are not corrupt or unreadable. Untested backups are no backups.

You know what all automations are going to do if you leave them on. You talked with the client about possible impacts, and the client is ready to check the data after you finish your operations.

You set up, with the client, a timeframe in which to do the data operation. If the data operation impacts tables that users normally work on, you freeze all those users during that timeframe.

Remember to deactivate any PB, WF, or APEX that can impact the migration. You didn't study them just to forget them. If this is an LDV job, take into account the considerations listed above.

#### DATA OPERATION

1. Go to your tool and edit the Sandbox jobs.
2. Edit the job login to point to production.
3. Save all the jobs.
4. Run, in order, the jobs you prepared.

When the number of failures is low enough, study the failure files, take any corrective action necessary, then use those files as a new source for a new data load operation. Continue this loop until the number of rejects is tolerable. This will ensure that if for some reason you need to redo the entire operation, you can take the same steps in a much easier fashion.

Once you are done, take the failure files, study them, and prepare a recap email detailing the failures and why they failed. It's their data, they have a right to know.

#### POST-MIGRATION

- Make sure everything looks fine and that you carried everything over.
- Warn their PM that the migration is done and request testing from their side.
- If you deactivated Workflows or PBs or anything else so the migration passes, ACTIVATE THEM BACK AGAIN.
- Unfreeze users if needed.

Go drink champagne.
#### IF SHIT DOESN'T LOOK RIGHT

You have a backup. Don't panic.
- Identify WTF is causing data to be wrong.
- Fix that.
- Get your backup, restore data to where it was before the fuckup. Ideally, only restore affected fields. If needed, restore everything.
- Redo the data load if needed.

# Getting the right (number of) Admins

Salesforce Success Services - Achieve Outstanding CRM Administration

Because Salesforce takes care of many traditional administration tasks, system administration is easier than ever before. Setting up, customizing the application, training users, and “turning on” the new features that become available with each release—all are just a few clicks away. The person responsible for these tasks is your Salesforce CRM administrator. Because this person is one of the most important resources in making your implementation a success, it’s important to carefully choose your administrator and to continually invest in his or her professional development. You can also choose to have Salesforce handle administrator tasks for you.

Note: Larger enterprise implementations often use a role called Business Analyst or Business Application Manager as well, particularly for planning the implementation and ensuring adoption once the solution is live.

Although the most common customization tasks don’t require coding, you may want to consider using a professional developer for some custom development tasks, such as writing Force.com code (Apex), developing custom user interfaces with Force.com pages (Visualforce), or completing complex integration or data migration tasks.

In many ways, the administrator fills the role played by traditional IT departments: answering user questions, working with key stakeholders to determine requirements, customizing the application to appeal to users, setting up reporting and dashboards to keep managers happy, keeping an eye on availability and performance, activating the features in new releases, and much more.

This paper will help you to make important choices when it comes to administering your Salesforce CRM application, including:

- Finding the right person(s)
- Investing in your administrator(s)
- Providing adequate staffing
- Getting help from Salesforce

#### Find the right administrator

Who would make an ideal Salesforce CRM administrator? Experience shows that successful administrators can come from a variety of backgrounds, including sales, sales operations, marketing, support, channel management, and IT. A technical background may be helpful, but is not necessary. What matters most is that your administrator is thoroughly familiar with the customization capabilities of the application and responsive to your users.

Here are some qualities to look for in an administrator:

- A solid understanding of your business processes
- Knowledge of the organizational structure and culture to help build relationships with key groups
- Excellent communication, motivational, and presentation skills
- The desire to be the voice of the user in communicating with management
- Analytical skills to respond to requested changes and identify customizations

#### Invest in your administrator

Investing in your administrator will do wonders for your Salesforce CRM solution. With an administrator who is thoroughly familiar with Salesforce CRM, you’ll ensure that your data is safe, your users are productive, and you get the most from your solution. Salesforce offers both self-paced training and classroom training for administrators. For a list of free, self-paced courses, go to Salesforce Training & Certification.
To ensure that your administrator is fully trained on all aspects of security, user management, data management, and the latest Salesforce CRM features, enrol your administrator in Administration Essentials (ADM201). The price of this course includes the cost of the certification that qualifies your administrators to become Salesforce.com Certified Administrators. For experienced administrators, Salesforce offers the Administration Essentials for Experienced Admins (ADM211) course.

#### Providing adequate staffing

The number of administrators (and, optionally, business analysts) required depends on the size of your business, the complexity of your implementation, the volume of user requests, and so on. One common approach for estimating the number of administrators you need is based on the number of users.

**Number of users** | **Administration resources** |
---|---|
1 – 30 users | < 1 full-time administrator |
31 – 74 users | 1+ full-time administrator |
75 – 149 users | 1 senior administrator; 1 junior administrator |
150 – 499 users | 1 business analyst, 2–4 administrators |
500 – 750 users | 1–2 business analysts, 2–4 administrators |
> 750 users | Depends on a variety of factors |
**Process Builder is old, decrepit, and deprecated.** **You can't create new ones, and if you're editing old ones you should be migrating to Flows instead.** **This is ARCHIVED content, will never be updated, and is here for history reasons.**
Normally we put bypasses in everything (workflows, validation rules, etc). Process Builders especially are interesting to bypass because they're still SLOW AS HELL and they can be prone to unforeseen errors - specifically during data loads. Plus, if you have Process Builders sending emails, you probably want to skip them when you're loading data massively.

A few years ago I didn't find a solution that suited me. A year or so ago they activated system labels for PB, so you can search for the custom setting like in WF - but you couldn't go to the next element, so you had to add the bypass, in formula mode, to every element. Taxing and costly in hours, plus you had to use formulas for everything.

Here you set it once, in every TPB, and then you have a working bypass for every Process Builder ever. Low cost, easy to maintain, and it allows deactivation on mass loads or other operations where you don't want those things firing.

Ok, so there's the usual, recommended Bypass custom setting that I write about in my best practices. I added a PB bypass checkbox there. I then created a notification type, which allows you to use a "send custom notification" action as the bypassed branch's only action.

I would rather it be "no action", but that doesn't exist. So in the meantime, this:

- allows a full bypass at the Triggering PB level, so only one PB gets evaluated, and you don't need to add bypasses in any other PB or even any other decision diamond - so effectively one bypass, or maybe two, per Object
- doesn't touch the record, so it effectively does a full bypass
- has this semi-annoying notification, which is both a blessing and a curse, but I think they have "no action" on the roadmap.

# ARCHIVED - Process Builder Structural Conventions

**Process Builder is old, decrepit, and deprecated.** **You can't create new ones, and if you're editing old ones you should be migrating to Flows instead.** **This is ARCHIVED content, will never be updated, and is here for history reasons.**
## General Conventions

1\. If there are APEX triggers firing on an object, Process Builder SHOULD NOT be used. \*1

2\. If Process Builders existed before building the APEX triggers, the Process Builders SHOULD be replaced by APEX triggers and classes.

3\. Process Builders REALLY SHOULD NOT fire on, update, or otherwise reference, Person Accounts.

4\. Process Builders REALLY SHOULD NOT perform complex operations on records that can be massively inserted/updated as a routine part of organization usage.

5\. Process Builders MUST NOT call a Flow if firing on an object that can be massively inserted/updated as a routine part of organization usage.

6\. Process Builder execution SHOULD be limited to the exact cases where it is needed.

In all cases, a consultant SHOULD limit the number of Process Builders executing on an object.

## Structural Conventions

1\. Generally, a consultant SHOULD build Invocable Process Builders, and Invoke them from one single Process on the triggering Object.

❍ This is by opposition to creating one Process Builder per task.

❍ Invocable Process Builders cannot be used to trigger time-dependent actions, meaning you will probably end up with:

- one PB for create actions
- one PB for create/edit
- one PB for just time-dependent stuff

2\. Process Builders generally SHOULD NOT use the "no criteria" option of the Decision Diamonds. There is always at least one sanity check to do.

3\. Whenever possible, multiple Process Builders on an object should be migrated to a single Process Builder, with different actions evaluated one after the other. This is now officially mandated by Salesforce.

*\*1 This is a best practice, but it should be noted that for smaller organizations, triggers and process builders may coexist on the same objects.*

# ARCHIVED - Process Builder Naming Conventions

**Process Builder is old, decrepit, and deprecated.** **You can't create new ones, and if you're editing old ones you should be migrating to Flows instead.** **This is ARCHIVED content, will never be updated, and is here for history reasons.**
1. A Process Builder name ***SHALL*** always start with `PB`, followed by a number corresponding to the number of Process Builders in the Organization, followed by an underscore.
   a. If the Process Builder Triggers other Process Builders, it ***SHALL*** always start with `TPB` instead.
   b. If the Process Builder is Invoked by other Process Builders, it ***SHALL*** always start with `IPB` instead.
2. The end of a Process Builder name ***SHOULD*** always be:
   - the name of the object, in the case of a Triggering Process Builder (TPB)
   - the action carried out, in the case of an Invoked Process Builder (IPB)
   - the trigger and action, in the case of a standalone Process Builder (PB)
3. A Process Builder name ***COULD*** contain either `C`, `CE`, or `CES` wrapped by underscores, to show if the PB triggers on Creation, Creation and Edition, or Subsequent Modifications that Fill Criteria. The default assumed setting is `CE` if none is written. *\*3*
4. All Process Builder Triggers MUST have a description detailing their purpose.
5. A Process Builder Decision Diamond ***SHALL*** be named after the criteria that are used, in the most precise manner possible.
6. A Process Builder Action ***SHALL*** be named after the action being carried out, in the most precise manner possible.

#### Examples

Type | Name | Description |
---|---|---|
Process Builder | TPB01\_Opportunity | This Process Builder invokes all invocable Opportunity Process builders |
Process Builder | IPB01\_SetOwnerTarget | Copies over target from Owner to calculate monthly efficiency |
Process Builder | PB01\_ContactBirthdayEmail | Sends a birthday email on the contact’s birthday. |
Decision Diamond | Status is “Approved” | \#N/A |
Action | Sets Contact Scoring to 10 | \#N/A |
Process Builder (possible variation) | TPB01\_Opportunity | This Process Builder invokes all invocable Opportunity Process builders. Also Handles various actions such as birthday emails. |
# Flow General Notes

It is very important to note that Flows have almost nothing to do, complexity-wise, with Workflows, Process Builder, or Approval Processes. Where the old tools did a lot of (over-)simplifying for you, Flow exposes a lot of things that you quite simply never had to think about before, such as execution context, DML optimization, batching, variables, variable passing, etc. So if you are an old-timer upgrading your skills, note that **a basic understanding of programming (batch scripting is more than enough) helps a lot with Flow**. If you're a newcomer to Salesforce and you're looking to learn Flow, same comment - this is harder to learn and manipulate than most of the platform (apart from Permissions). This is normal.
##### **Intended Audience**

These conventions are written for all types of Salesforce professionals to read, but the target audience is the administrator of an organization. If you are an ISV, you will have considerations regarding packaging that we do not, and if you are a consultant, you should ideally use whatever the client wants (or the most stringent convention available to you, to guarantee quality).

##### **On Conventions**

As long as we're doing notes: conventions are opinionated, and these are no different. Much like you have different APEX trigger frameworks, you'll find different conventions for Flow. These specific conventions are made to be maintainable at scale, with an ease of modification and upgrade. This means that they by nature include boilerplate that you might find redundant, and specify elements very strongly (to optimize for cases where you have hundreds of Flows in an organization). **This does not mean you need to follow everything.** A reader should try to understand *why* the conventions are a specific way, and then decide whether or not this applies to their org.

At the end of the day, as long as you use **any** convention in your organization, we're good. This one, another one, a partial one, doesn't matter. Just structure your flows and elements.
##### **On our Notation**

Finally, regarding the naming of sub-elements in the Flows: we've had conversations in the past about the pseudo-[Hungarian notation](https://en.wikipedia.org/wiki/Hungarian_notation) that we recommend using. To clarify: we don't want to use Hungarian notation. We do so because Flow still doesn't split naming schemes between variables, screen elements, or data manipulation elements. This basically forces you to use Hungarian notation so you can have a `var_boolUserAccept` and a `S01_choiceUserAccept` (a variable holding the result of whether a user accepts some conditions, and the presentation in radio buttons of said acceptance), because you can't have two elements just named `UserAccept` even if technically they're different.

##### **On custom code, plugins, and unofficialSF**

On another note: Flow allows you to use custom code to extend its functionality. We define "custom code" as any LWC, APEX Class, and associated elements that are written by a human and plug into Flow. We recommend using as little of these elements as possible, and as many as needed. **This includes UnofficialSF**.

Whether you code stuff yourself, or someone else does it for you, custom code always requires audit and maintenance. Deploying UnofficialSF code to your org basically means that you own the maintenance and audit of it, much like if you had developed it yourself. We emit the same reservations as for using any piece of code on GitHub - if you don't know what it does **exactly**, you shouldn't be using it. This is because any third-party code is **not part of your MSA with Salesforce, and if it breaks, is a vector of attack, or otherwise negatively impacts your business, you have no official support or recourse.** This is not to say that these things are not great, or value-adding - but you are (probably) an admin of a company CRM, which means your first consideration should be **user data and compliance**, with ease of use coming second.
*Bonus useless knowledge: Flows themselves are just an old technology that Salesforce released in 2010: Visual Process Manager. That itself is actually just a scripting language: “The technology powering the Visual Process Manager is based on technology acquired from Informavores, a call scripting startup Salesforce bought last year.” (2009) Source*

# What Automation do I create Flowchart

[](https://wiki.sfxd.org/uploads/images/gallery/2021-01/image-1610555495687.png)

# Flow Meta Conventions

## Read these Resources first

1. The official [Flows best practice doc](https://help.salesforce.com/articleView?id=flow_prep_bestpractices.htm&type=0). Note we agree on most things, specifically the need to plan out your Flow first.
2. The [Flows limits doc](https://help.salesforce.com/articleView?id=flow_considerations_limit.htm&type=0). If you don't know the platform limits, how can you build around them?
3. The [Transaction limits doc](https://help.salesforce.com/articleView?id=flow_considerations_limit_transaction.htm&type=0). Same as above - you have to know the limits to play around them.
4. [The What Automation Do I Create Flowchart](https://wiki.sfxd.org/books/best-practices/page/what-automation-do-i-create-flowchart). Not everything needs to be a Flow.
5. [The Record-Triggered Automation Guide](https://architect.salesforce.com/decision-guides/trigger-automation), if applicable.

## Best Practices

These are general best practices that do not pertain to individual flows, but to Flows in regards to their usage within an Organization.

#### **On Permissions**

Flows should **ALWAYS** execute with the smallest amount of permissions possible for them to execute their task. Users should also ideally not have access to Flows they don't require. Giving Setup access so someone can access `DeveloperName` is bad - you should be using custom labels to store the Ids and reference those instead, just to limit Setup access.

**Use System mode sparingly. It is dangerous.** If used in a Communities setting, I REALLY hope you know why you're exposing data publicly over the internet, or that you're only committing information with no GETs.
Users can have access granted to specific Flows via their Profiles and Permission Sets, which you should really be using to ensure that normal users can't use the Flow that massively updates the client base for example.
#### **Record-Triggered Flows and Triggers should ideally not coexist on the same object in the same Context**

"Context" here means the [APEX Trigger Context](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_triggers_context_variables.htm). Note that not all of these contexts are exposed in Flow:

- **Screen Flows** execute outside of these contexts, but their Update elements do not allow you to carry out operations in the `before` context.
- **Record-Triggered Flows** execute either in the `before` or `after` context, depending on what you chose at the Flow creation screen (they are named "Fast Record Updates" and "Other Objects and Related Actions", respectively, because it seems Salesforce and I disagree that training people on a proper understanding of how the platform works is important).
The reason for the "same context" exclusivity is in case of multiple Flows and heavy custom APEX logic: in short, unless you plan *explicitly* for it, the presence of one or the other forces you to audit both in case of additional development or routine maintenance. You could technically leverage Flows and APEX perfectly fine together, but if you have a `before` Flow and a `before` Trigger both doing updates to fields, and you accidentally reference a field in both... debugging that is going to be fun. So if you start relying on APEX Triggers, while this doesn't mean you have to change all the Flows to APEX logic straight away, it does mean you need to plan for a migration path.

In the case where some automations need to be admin-editable but other automations require custom code, you should be migrating the record-triggered logic to APEX, and leveraging sub-flows which get called from your APEX logic.
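For reference, calling a sub-flow from Apex looks like the hypothetical sketch below - `Invoice_Set_Defaults` is an assumed autolaunched flow API name with a `recordId` input variable, not something that exists out of the box.

```apex
// Hypothetical handler: the ordering and heavy lifting live in Apex, while the
// admin-editable piece is delegated to an autolaunched sub-flow.
public with sharing class InvoiceTriggerHandler {
    public static void applyDefaults(Id invoiceId) {
        Map<String, Object> inputs = new Map<String, Object>{
            'recordId' => invoiceId  // input variable defined on the flow
        };
        // The inner class name matches the flow's API name.
        Flow.Interview.Invoice_Set_Defaults run =
            new Flow.Interview.Invoice_Set_Defaults(inputs);
        run.start();
    }
}
```

Note that `Flow.Interview.start()` runs one interview per call, so in a bulk trigger context you would rather pass a collection variable into a single interview (or keep the bulk path fully in Apex) than loop over records calling the flow each time.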
#### **Flow List Views should be used to sort and manage access to your Flows easily**

The default list view is not as useful as others can be. We generally suggest creating at minimum one list view, and two if you have installed packages that ship Flows:

- One List View that shows all active flows, with the following fields displayed: `Active, Is Using an Older Version, Triggering Object or Platform Event Label, Process Type, Trigger, Flow Label, Flow API Name, Flow Description, Last Modified Date`. This will allow you to easily find your flows by Object (apart from Scheduled Flows or Sub-Flows, but this is handled via Naming Conventions), see if you started working on a Flow but didn't activate the last version, and view the beautiful descriptions that you will have set everywhere.
- One List View that shows all Package flows, which contains `Active, Is Using an Older Version, Flow Namespace, Overridable, Overridden By, Overrides`. This allows you to easily manage your updates to these Flows that are sourced from outside your organization.

#### **Flows are considered Code for maintenance purposes**

Do NOT create or edit Flows in Production, especially a Record-Triggered flow. If any user does a data load operation and you corrupt swaths of data, you will know the meaning of “getting gray hairs”, unless you have a backup - which I am guessing you will not have if you were doing live edits in production.
No, this isn't a second helping of our note in the [General Notes](https://wiki.sfxd.org/books/best-practices/page/flow-general-notes). This is about your Flows - the ones you built, the ones you know very well and are proud of. There are a **swath** of reasons to consider Flows to be Code for maintenance purposes, but in short:

- if you're tired, mess up, or are otherwise wrong, Production updates of Flows can have HUGE repercussions depending on how many users are using the platform, and how impactful your Flow is
- updating Flows in Production will break your deployment lifecycle, and cause problems in CI/CD tools if you use them
- updating Flows in Production means that you have no safe reproducibility environment unless you refresh a sandbox
- unless you know every interaction with other parts of the system, a minor update can have impact due to other automation - whether it be APEX, or other Flows.

In short - it's a short and admin-friendly development, but it's still development.

#### **On which automation to create**

In addition to our (frankly not very beautiful) Flowchart, when creating automations, the order of priority should be:

1. **Object-bound, BEFORE Flows.** These are the most CPU-efficient Flows to create. They should be used to set information that is required on Objects that are created or updated.
2. **User-bound Flows**, meaning Screen Flows. These aren't tied to automation, and so are very CPU-efficient and testable.
3. **Object-bound, Scheduled Flows.** If you can, create your flows as a Schedule rather than something that will spend a lot of time waiting for an action - a great example of this is scheduled emails one month after something happens. Do test your batch before deploying it, though.
4. **Object-bound, AFTER Flows.** These are last because they are CPU-intensive, can cause recursion, and generally can have more impact in the org than other sources of automation.

#### **On APEX and LWCs in Flows**

- **APEX or LWCs that are specifically made to be called from Flows should be clearly named and defined in a way that makes their identification and maintenance easier** (see the sketch after this list).
- **Flows that call APEX or LWCs are subject to more limits and potential bugs than fully declarative ones.** When planning to use one, factor in the maintenance cost of these parts. Yes, this absolutely includes actions and components from the wonderful UnofficialSF. If you install unpackaged code in your organization, YOU are responsible for maintaining it.
- On a related note, **don't use non-official components without checking their limits.** Yes, UnofficialSF is great, and it also contains components that are not bulkified or that contain bugs.

To reiterate: if you install unpackaged code in your organization, YOU are responsible for maintaining it.
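As an illustration of the first bullet above, here is a hedged sketch of what "clearly named and defined" Apex for Flow could look like - the class name, label, and the `Flow_` prefix are illustrative choices, not an official convention.

```apex
// Hypothetical invocable Apex written specifically for Flow consumption:
// the name and label make the Flow dependency obvious to maintainers.
public with sharing class Flow_GetOpenOpportunityIds {
    @InvocableMethod(label='Get Open Opportunity Ids (Flow only)'
                     description='Returns open Opportunity Ids per Account Id. Intended to be called from Flow only.')
    public static List<List<Id>> getOpenOpportunityIds(List<Id> accountIds) {
        // Bulkified: a single query covers every interview in the batch.
        Map<Id, List<Id>> byAccount = new Map<Id, List<Id>>();
        for (Opportunity opp : [SELECT Id, AccountId FROM Opportunity
                                WHERE IsClosed = false AND AccountId IN :accountIds]) {
            if (!byAccount.containsKey(opp.AccountId)) {
                byAccount.put(opp.AccountId, new List<Id>());
            }
            byAccount.get(opp.AccountId).add(opp.Id);
        }
        // One output list per input Id, as the invocable contract requires.
        List<List<Id>> results = new List<List<Id>>();
        for (Id accountId : accountIds) {
            results.add(byAccount.containsKey(accountId) ? byAccount.get(accountId) : new List<Id>());
        }
        return results;
    }
}
```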
#### **Flow Testing and Flow Tests**

If at all possible, **Flows should be [Tested](https://help.salesforce.com/s/articleView?id=sf.flow_test.htm&type=5)**. This isn't always possible because of [these considerations](https://help.salesforce.com/s/articleView?id=sf.flow_considerations_feature_testing.htm&type=5) (which aren't actually exhaustive - I have personally seen edge cases where Tests fail but the actual functionality runs, because of the way Tests are built, and I have also seen deployment errors linked to Tests). [Trailheads exist to help you get there](https://trailhead.salesforce.com/content/learn/modules/flow-testing-and-distribution/make-sure-your-flow-works).

A Flow Test is **not** just a way to check your Flow works. A proper test should:

- Test that the Flow works
- Test that the Flow works in other Permission situations
- Test that the Flow **doesn't** work in critical situations you want to avoid \[if you're supposed to send one email, you should *probably catch the situation where you're sending 5 mil*\]

...and in addition to that, a proper Flow Test will warn you **if things stop working down the line.** Most of these boilerplates are *negative bets against the future* - we are expecting things to break, people to forget configuration, and updates to be made out of process. Tests are a way to mitigate that.
We currently consider Flow Tests to be "acceptable but still bad", which we expect to change as time goes on, but as it's not a critical feature, we aren't sure when they'll address the current issues with the tool.
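In the meantime, a common complement (not a replacement) for Flow Tests is an ordinary Apex test that drives the record-triggered logic through DML and asserts the outcome. The sketch below assumes a hypothetical flow on Case that sets Priority to "High" when Origin is "Phone" - swap in whatever your flow actually does.

```apex
// Hypothetical Apex test exercising a record-triggered flow through DML.
@isTest
private class CaseEscalationFlow_Test {
    @isTest
    static void setsHighPriorityForPhoneCases() {
        Case c = new Case(Subject = 'Flow test', Origin = 'Phone');

        Test.startTest();
        insert c;   // record-triggered flows fire on this DML
        Test.stopTest();

        c = [SELECT Priority FROM Case WHERE Id = :c.Id];
        System.assertEquals('High', c.Priority,
            'The record-triggered flow should escalate phone cases');
    }
}
```

The same test will start failing if someone later deactivates the flow or edits its entry criteria out of process, which is exactly the "negative bet against the future" described above.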
Note that proper Flow Testing will probably become a requirement at some point down the line.

#### **On Bypasses**

Flows, like many things in Salesforce, can be configured to respect [Bypasses](https://wiki.sfxd.org/books/best-practices/page/bypasses). In the case of Flows, you might want to call these "[feature flags](https://developer.salesforce.com/docs/atlas.en-us.packagingGuide.meta/packagingGuide/fma_best_practices.htm)". This is a GREAT best practice, but is generally overkill unless you are a very mature org with huge amounts of processes.

# Flow Structural Conventions - Common Core

As detailed in the General Notes section, these conventions are heavily opinionated towards maintenance and scaling in large organizations. The conventions contain:

- a "common core" set of structural conventions that apply everywhere (this page!)
- conventions for [Record Triggered Flows](https://wiki.sfxd.org/books/best-practices/page/flow-structural-conventions-record-triggered) specifically
- conventions for [Scheduled Flows](https://wiki.sfxd.org/books/best-practices/page/flow-structural-conventions-scheduled) specifically

Due to their nature of being triggered by the user and outside of a specific record context, Screen Flows do not require specific structural adaptations at the moment that are not part of the common core specifications.

## Common Core Conventions

#### On System-Level Design

##### **Do not do DMLs or Queries in Loops**

**Simpler: no pink squares in loops.**
**[DML](https://developer.salesforce.com/docs/atlas.en-us.apexref.meta/apexref/apex_dml_section.htm)** is Data Manipulation Language. Basically it is what tells the database to change stuff. DML Operations include Insert, Update, Upsert, and Delete, which you should know from Data Loader or other such tools.
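Flow doesn't show you code, but the rule is easiest to see in Apex terms: collect your changes inside the loop, then do a single DML once the loop is done. A minimal sketch (the "Unknown" default is just an illustrative value):

```apex
// Build the list inside the loop, do ONE update after it - never per iteration.
List<Contact> toUpdate = new List<Contact>();
for (Contact con : [SELECT Id, MailingCity FROM Contact WHERE MailingCity = null]) {
    con.MailingCity = 'Unknown';
    toUpdate.add(con);          // no DML here
}
update toUpdate;                // one DML for the whole batch
```

In Flow, the equivalent is an Assignment element adding records to a collection variable inside the loop, with a single Update Records element after the loop ends.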
Salesforce now actually warns you when you're doing this, but it still bears saying. *Don't do this.*

##### **Handle your Flow errors**

Error Paths, or Fault Paths, are available both in Free Design mode and in Auto-Layout mode. In Free mode, you need to handle all possible other paths before the Fault path becomes available. In Auto-Layout mode, you can simply select Fault Path.
Screen Flow? Throw a Screen, and display what situation could lead to this. Maybe also send the Admin an email explaining what happened. Record-triggered Flow? [Throw an email](https://help.salesforce.com/s/articleView?id=sf.flow_ref_elements_actions_sendemail.htm&type=5) to the APEX Email Exception recipients, or emit a [Custom Notification](https://help.salesforce.com/s/articleView?id=sf.notif_builder_custom.htm&language=en_US&type=5). Hell, better yet throw that logic into a Subflow and call it from wherever.(Note that if you are in a sandbox with email deliverability set to System Only, regular flow emails and email alerts will not get sent.)
[](https://wiki.sfxd.org/uploads/images/gallery/2022-02/image-1644417882117.png)

Handling Errors this way allows you to:

- not have your users presented with "UNEXPECTED EXCEPTION - YOUR ADMIN DID THINGS BADLY"
- maybe deflect a few error messages, in case some things can be fixed by the user doing things differently
- have a better understanding of how often Errors happen.
You want to supercharge your error handling? Audit [Nebula Logger](https://github.com/jongpie/NebulaLogger) to see if it can suit your needs. With proper implementation (and knowledge of how to service it - remember that installed code is still code that requires maintenance), Nebula Logger will allow you to centralize **all** logs in your organization, and have proper notification when something happens - whether in Flow, APEX, or whatever.

##### **Don't exit loops based on decision checks**

The Flow engine doesn't support that well, and you will have weird and confusing issues if you ever go back to the main loop.

*Don't do this either - always finish the loop*

Issues include variables not being reset, DML errors if you do come back to the loop, and all-around generally unpredictable situations. You *can* still do this if you absolutely NEVER come back to the loop, but it's bad design.

##### **Do not design Flows that will have long Wait elements**

This is often done by Admins coming from the Workflow or Process Builder space, where you could just say "do that 1 week before contract end date" or "1 day after Opportunity closure". This design is sadly as outdated as the tools that permitted it. Doing this will have you exceed your Paused Interview limits, and actions just won't be carried out.
A proper handling of "1 day before/after whenever" in Flow is often via a Scheduled Flow. Scheduled Flows execute once daily (or more often if you use plugins to allow it), check conditions, and execute based on these conditions. In the above case, you would be creating a Scheduled Flow that:

- queries all Contracts whose End Date is at `TODAY() + 7`, so it acts exactly one week before the end date (see the SOQL sketch below)
- proceeds to loop over them and do whatever you need it to.

Despite it not being evident in the Salesforce Builder, there is a VERY big difference between the criteria in the Scheduled Flow's Start element and an initial GET:

- Putting criteria in the Start element has fewer conditions available, but effectively limits the scope of the Flow to only these records, which is great in **big environments**. It also fires **one Flow Interview per record**, and then bulkifies operations at the end - so doing a **GET** when you put criteria in the Start element should be done after due consideration only.
- On the opposite end, putting no criteria and relying on an initial GET does a single Flow Interview, and so will run less efficiently on huge amounts of records, *but* does allow you to handle more complex selection criteria.
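The selection above, expressed in SOQL terms (this is what the Start element criteria, or a Get element, would be doing for you - Contract and its EndDate field are standard):

```apex
// Records whose End Date is exactly one week from today.
Date targetEndDate = Date.today().addDays(7);
List<Contract> endingInOneWeek = [
    SELECT Id, EndDate, AccountId
    FROM Contract
    WHERE EndDate = :targetEndDate
];
// ... loop over endingInOneWeek and do whatever you need to
```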
##### **Do not Over-Optimize your Flows**

When Admins start becoming great at Flows, everything looks like a Flow. The issue with that is that sometimes, Admins will start building Flows that shouldn't be built, because Users should be using standard features (yes, I know, convincing Users to change habits can be nigh impossible, but it is sometimes still the right path)... and sometimes, they will keep building Flows that just should be APEX instead.

If you are starting to hit CPU timeout errors, Flow Element Count errors, or huge amounts of slowness... you're probably trying to shove things into Flow that should be something else instead.
APEX has more tools than Flows, as do LWCs. Sometimes, admitting that development is necessary is not a failure - it's just good design.

#### On Flow-Specific Design

##### **Flows should have one easily identifiable Triggering Element**

This relates to the [Naming Conventions](https://wiki.sfxd.org/books/best-practices/page/flow-naming-conventions).

**Flow Type** | **Triggering Element** |
---|---|
Record-Triggered Flows | It is the Record that triggers the DML |
Event-based Flows | It should be a single event, as simple as possible. |
Screen Flows | This should be either a single recordId, a single sObject variable, or a single sObject list variable. In all cases, the Flow that is being called should query what it needs by itself, and output whatever is needed in its context. |
Subflows | The rule can vary - it can be useful to pass multiple collections to a Subflow in order to avoid recurring queries on the same object. However, passing multiple single-record variables, or single text variables, to a Subflow generally indicates a design that is overly coupled with the main flow and should be more abstracted. |
In the example below, the Pricebook2Id variable should be taken from the Order variable.
[](https://wiki.sfxd.org/uploads/images/gallery/2022-02/image-1644417918698.png) ##### **Try to make Subflows as reusable as possible**. A Subflow that does a lot of different actions will probably be single-use, and if you need a subpart of it in another logic, you will probably build it again, which may lead to higher technical debt. If at all possible, each Subflow should execute a single function, within a single Domain. Yes, this ties into "[service-based architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)" - we did say Flows were code. ##### **Do not rely on implicit references** This is when you query a record, then fetch parent information via {MyRecord.ParentRecord\_\_c.SomeField\_\_c}. While this is *useful*, it’s also very prone to errors (specifically with fields like RecordType) and makes for wonky error messages if the User does not have access to one of the intermediary records. Do an explicit Query instead if possible, even if it is technically slower. ##### **Tie each Flow to a Domain** This is also tied to Naming Conventions. Note that in the example below, the Domain is the Object that the Flow lives on. One might say it is redundant with the Triggering Object, except Scheduled Flows and Screen Flows don't have this populated, and are often still linked to specific objects, hence the explicit link. Domains are definable as **Stand-alone groupings of function which have a clear Responsible** [**Persona**](https://trailhead.salesforce.com/content/learn/modules/ux-personas-for-salesforce/get_started_with_personas)**.** **[](https://wiki.sfxd.org/uploads/images/gallery/2022-02/image-1644417943258.png)** ##### **Communication between Domains should ideally be handled via Events** In short, if a Flow starts in Sales (actions that are taken when an Opportunity closes, for example) and finishes in Invoicing (creates an invoice and notifies the people responsible for those invoices), this should be two separate Flows, each tied to a single Domain. Note that the Salesforce Event bus is mostly built for External Integrations. The number of events we specify here is quite high, and as such on gigantic organisations it might not be best practice to handle things this way - you might want to rely on an external event bus instead. That being said, if you are in fact an enterprise admin, I expect you are considering the best use case in every practice you implement, and as such this disclaimer is unnecessary.
[](https://wiki.sfxd.org/uploads/images/gallery/2022-02/image-1644417931475.png) *Example of Event-Driven decoupling* ##### **Avoid cascading Subflows wherein one calls another one that calls another one** Unless the secondary subflows are basically fully abstract methods handling inputs from any possible Flow (like one that returns a collection from a multipicklist), you're adding complexity in maintenance which will be costly. # Flow Structural Conventions - Record-Triggered As detailed in the General Notes section, these conventions are heavily opinionated towards maintenance and scaling in large organizations. The conventions contain: - a "[common core](https://wiki.sfxd.org/books/best-practices/page/flow-structural-conventions-common-core)" set of structural conventions that apply everywhere - conventions for Record Triggered Flows specifically (this page!) - conventions for [Scheduled Flows](https://wiki.sfxd.org/books/best-practices/page/flow-structural-conventions-scheduled) specifically. This page directly changes conventions that were emitted by SFXD in 2019, and reiterated in 2021. This is because the platform has changed since then, and as such we are recommending new, better, more robust ways to build stuff. If you recently used our old guides - they are still fine, we just consider this new version to be better practice.
## Record-Triggered Flow Design #### Before Creating a Flow ##### Ensure there are no sources of automation touching the Object or Fields If the same field is updated in another automation, default to that automation instead, or refactor that automation to Flow. If the Object is used in other sources of automation, you might want to default to that as well, or refactor that automation to Flow, unless you can ensure that both that source of automation and the Flow you will create will not cross-impact each other. You can leverage "where is this used" in sandbox orgs to check if a field is already referenced in a Flow - or take the HULK SMASH approach and just create a new sandbox, and try to delete the field. If it fails deletion, it'll tell you where it is referenced.
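A middle ground between "where is this used" and the HULK SMASH approach is to search retrieved metadata from the command line. This is only a sketch under assumptions: `MyField__c` is a made-up field API name and `my-sandbox` a made-up org alias.

```
# Pull all Flow metadata from the sandbox into the local project
sf project retrieve start --metadata "Flow" --target-org my-sandbox

# List every Flow file that references the field
grep -rl "MyField__c" force-app/main/default/flows
```

Widening the retrieve (Workflow, ApexClass, ApexTrigger...) lets you catch other automation sources touching the same field.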
##### Verify the list of existing Flows and Entry Criteria you have You don't want to have multiple sources of the same entry criteria in Flows because it will make management harder, and you also don't want to have multiple Flows that do almost the same thing because of scale. Identifying if you can refactor a Flow into a Subflow that will be called from multiple places is best done before trying to build anything.
##### Ask yourself if it can't be a Scheduled Flow instead Anything date based, anything that has wait times, anything that doesn't need to be at the instant the record changes status but can instead wait a few hours for the flow to run - all these things can be scheduled Flows. This will allow you to have better save times on records. ##### Prioritize BEFORE-save operations whenever possible This is more efficient in every way for the database, and avoids recurring SAVE operations. It also mostly avoids impacts from other automation sources (apart from Before-Save APEX). Designing your Flow to have as many before-save elements as possible will save you time and effort in the long run. ##### Check if you need to update your bypasses Specifically for Emails, using [bypasses](https://wiki.sfxd.org/books/best-practices/page/bypasses) remains something that is important. Because sending emails to your entire database when you're testing stuff is probably not what you want. ##### Consider the worst case Do not build your system for the best user but the worst one. Ensure that faults are handled, ensure that a user subject to every single piece of automation still has a usable system, etc. #### On the number of Flows per Object and Start Elements - **Before-Save Flows** Use **as many before-save flows as you require**. You *should*, but do not *have to*, set Entry Conditions on your Flows. Each individual Flow should be tied to a **functional Domain**, or a specific **user story**, as you see most logical. The order of the Flows in the Flow Trigger Explorer should not matter, as a single field should **never** be referenced in multiple before save flows as the target of an assignment or update. - **After-Save Flows** Use **one** Flow for actions that should trigger without entry criteria, and orchestrate them with Decision elements. Use **one** Flow to handle **Email Sends** if you have multiple email actions on the Object and need to orchestrate them. Use **as many additional flows as you require, as long as they are tied to unique Entry Criteria**. Set the Order of the Flows **manually in the Flow Trigger Explorer** to ensure you know how these elements chain together. Offload any computationally complex operation that **doesn't need to be done immediately** to a **Scheduled Path**. Entry Criteria specify when a Flow is *evaluated*. It is a very efficient way to avoid Flows triggering unduly and saves a lot of CPU time. Entry Criteria however do require knowledge of Formulas to use fully (the basic "AND" condition doesn't allow a few things that the Formula editor does in fact handle properly), and it is important to note that the entire Flow does not execute if the Entry Criteria aren't met, so you can't catch errors or anything.
To build on what's written above: - Before-Save flows are very fast, generally have no impact on performance unless you do very weird stuff, and should be easy to maintain as long as you name them properly, even if you have multiple per object. "Tying" a flow to a Domain or Object means doing so by its name and structure. You can technically do a Flow that does updates both for Sales and Invoicing, but this is generally meh if you need to update a specific function down the line. Logical separation of responsibilities is a topic you'll find not only here but also in a lot of development books.
Before-Save Flows don't actually require an Update element - this is just for show and to allow people to feel more comfortable with it. You can technically just use Assignments to manipulate the `$Record` variable with the same effect. *It actually used to be the only way to do before-save, but was thought too confusing.*
- After-Save flows, while more powerful, require you to do another DML operation to commit anything you are modifying. This has a few impacts, such as the possibility of re-running automations if you update the record that already triggered your automation. The suggestions we make above are based on the following: - Few actions on records should be without entry criteria. This allows more flows to be present on each object without slowdowns. The limit of one such Flow is because it should pretty much not exist, or stay small. - Emails sent from Objects are always stress-inducing in case of data loads, and while a proper bypass usage does not require grouping all emails in a Flow, knowing that all email alerts are in a specific place does make maintenance easier. - Entry-Criteria filtered Flows are quite efficient, and so do not need to be restricted in number anymore. - Ordering Flows manually is to avoid cases where the order of Flows is unknown, and interaction between Flows that you have not identified yields either positive or negative results that can't be reproduced without proper ordering. - Scheduled Paths are great if you are updating related Objects, sending notifications, or doing any other operation that isn't time-sensitive for the user. We used to recommend a single Flow per context. This is obviously no longer the case. This is because anything that pattern provided, other tools now provide, and do better. The "One flow per Object pattern" was born because: - Flows only triggered in `after` contexts - Flows didn't have a way to be orchestrated between themselves - Performance impact of Flows was huge because of the lack of entry criteria None of that is true anymore. The remnant of that pattern still exists in the "no entry criteria, after context, flow that has decision nodes", so it's not completely gone. So while the advent of Flow Trigger Explorer was one nail in the coffin for that pattern, the real final one was actual good entry criteria logic.
Entry Criteria are awesome but are not properly surfaced in either the Flow List View or the Start Element. Ensure that you follow proper Description filling so you can in fact know how these elements work, otherwise you will need to open every single Flow to check what is happening.
## On Delayed Actions Flows allow you to do complex queries and loops as well as schedules. As such, there is virtually no reason to use wait elements or delayed actions, unless said waits are for a platform event, or the delayed actions are relatively short. Any action that is scheduled for a month in the future for example should instead set a flag on the record, and let a Scheduled Flow evaluate the records daily to see if they fit criteria for processing. If they do in fact fit criteria, then execute the action. A great example of this is Birthday emails - instead of triggering an action that waits for a year, do a Scheduled flow running daily on contacts whose birthday it is. This makes it a lot easier to debug and see what’s going on. # Flow Structural Conventions - Scheduled As detailed in the General Notes section, these conventions are heavily opinionated towards maintenance and scaling in large organizations. The conventions contain: - a "[common core](https://wiki.sfxd.org/books/best-practices/page/flow-structural-conventions-common-core)" set of structural conventions that apply everywhere - conventions for [Record Triggered Flows](https://wiki.sfxd.org/books/best-practices/page/flow-structural-conventions-record-triggered) specifically - conventions for Scheduled Flows specifically (this page!) ## Scheduled Flow Design As detailed in the Common Core conventions, despite it not being evident in the Salesforce Builder, there is a VERY big difference between the criteria in the Scheduled Flow execution start, and an initial GET element in a Scheduled Flow that has no Object defined. \- Putting criteria in the Start Element offers fewer conditions, but effectively limits the scope of the Flow to only these records, which is great in **big environments**. It also fires **One Flow Interview per Record**, and then bulkifies operations at the end. A common mistake is to do the above selection, say "Accounts where Active = TRUE" for example, and then do a Get Records afterwards, querying the accounts again, because of habits tied to Record-Triggered Flows. If you do this, you are effectively querying the entire list of Accounts X times, where X is the number of Accounts in your original criteria. Which is bad.
\- Conversely, putting no criteria and relying on an initial Get does a single Flow Interview, and so will run less effectively on huge amounts of records, *but* does allow you to handle more complex selection criteria. In the first case, you should consider that the Flow handles one record at a time, which is populated in `$Record` - much like in Record-Triggered Flows. In the second screenshot, you can see that the Choose Object is empty, but the GET is done afterwards - `$Record` is as such empty, but the Get Active Accounts will generate a collection variable containing multiple accounts, which you will need to iterate over (via a `loop` element) to handle the different cases.
# Flow Naming Conventions ## Meta-Flow Naming 1. A Flow name shall always start by the name of the **Domain from which it originates**, followed by an underscore. **In most cases, for Flows, the Domain is equivalent to the Object that it is hosted on**. As per structural conventions, cross-object Flows should be avoided and reliance on Events to synchronize flows that do cross-object operations should be used.In `Account_BeforeSave_SetClientNumber`, the Domain is Account, as this is where the automation is started. It could also be something like `AccountManagement` , if the Account Management team owned the process for example.
2. The Domain of the Flow shall be followed by a code indicating the type of the Flow, respecting the cases as follows: 1. If the flow is a Screen Flow, the code shall be **SCR**. 2. If the flow is a SubFlow, the code shall be **SFL**. 3. If the flow is specifically designed to run on a schedule, the code shall be **SCH**. 4. If the flow is a Record Triggered flow, the code shall instead indicate the contexts in which the Record Triggered Flow executes. In addition, the flow name shall contain the context of execution, meaning either **Before** or **After**, followed by either **Create**, **Update** or **Delete**. 5. If the flow is an Event Triggered flow, the code shall be **EVT** instead. 6. If the flow is specifically designed to be a Record Triggered flow that ONLY handles email sends, the code shall be **EML** instead. In `Account_AfterCreateAfterUpdate_StatusUpdateActions`, you identify that it is Record-Triggered, executes both on creation and update, in the After Context, and that it carries out actions related to when the entry criteria (the status has changed) are met.
In the case of `Invoice_SCR_CheckTaxExemption`, you know that it is a Screen Flow, executing from the Invoice Lightning Page, that handles Tax Exemption related matters.
3. A Flow name shall further be named after the action being carried out in the most precise manner possible. For Record Triggered Flows, this is limited to what triggers it. See example table for details. 4. A Flow Description should always indicate what the Flow requires to run, what the entry criteria are, what it does functionally, and what it outputs.Type | Name | Description |
---|---|---|
Screen Flow | Quote\_SCR\_addQuoteLines | \[Entry = None\] A Screen flow that is used to override the Quote Lines addition page. Provides function related to Discount calculation based on Discounts\_cmtd. |
Scheduled Flow | Contact\_SCH\_SendBirthdayEmails | \[Entry = None\] A Scheduled flow that runs daily, checks if a contact is due a Birthday email, and sends it using the template marked Marketing\_Birthday |
Before Update Flow, on Account | Account\_BeforeUpdate\_SetTaxInformation | \[Entry = IsChanged(ShippingCountry)\] Changes the tax information, rate, and required elements based on the new country. |
After Update Flow, on Account | Account\_AfterUpdate\_NewBillingInfo | \[Entry = IsChanged(ShippingCountry)\] Fetches related future invoices and updates their billing country and billing information. Also sends a notification to Sales Support to ensure country change is legitimate. |
Event-Triggered Flow, creating Invoices, which triggers when a Sales Finished event gets fired | Invoice\_EVT\_SalesFinished | Creates an Invoice and notifies Invoicing about the new invoice to validate based on Sales information |
Record-triggered Email-sending Flow, on Account. | Account\_EML\_AfterUpdate | \[Entry = None\] Handles email notifications from Account based on record changes. |
Type | Name | Description |
---|---|---|
Get accounts matching active = true | Get\_ActiveAccounts | Fetches all accounts where IsActive = True |
Update Modified Contacts | Update\_ListModifiedContacts | Commits all changes from previous assignments to the database |
Creates an account configured during a Screen Flow in a variable called `var_thisAccount` | Create\_ThisAccount | Commits the Account to the database based on previous assignments. |
Type | Name | Description |
---|---|---|
Screen within a Flow | Label: Select Price Book Entries Name: S01\_SelectPBEs | Allows selection of which products will be added to the quote, based on pricebookentries fetched. |
Screen that handles errors based on a DML within a Flow | SERR01\_GET\_PBE | Happens if the GET on Pricebook Entries fails. Probably related to Permissions. |
Text element in the first screen of the flow | S01\_T01 | *Fill with actual Text from the Text element - there is no description field* |
DataTable in the first screen of the flow | S01\_LWCTable\_Products | *May be inapplicable as the LWCs may not offer a Description field.* |
Type | Name | Description |
---|---|---|
Formula to get the total number of Products sold | formula\_ProductDiscountWeighted | Weights the discount by product type and calculates actual final discount. Catches null values for discounts or prices and returns 0. |
Variable to store the recordId | recordId | Stores the record Id that starts the flow. *Exempt from normal conventions because legacy Salesforce behavior.* *Note: This var name is CASE SENSITIVE.* |
Record that we create from calculated values in the Flow in a Loop, before storing it in a collection variable to create them all | sObj\_This\_OpportunityProduct | The Opportunity Product the values of which we calculate. |
Type | Name | Description |
---|---|---|
Assignment to set the sObj\_This\_OpportunityProduct record values | SET\_OppProdValues | Sets the OppProd based on calculated discounts and quantities. |
Assignment to store the Opportunity Product for later creation in a collection variable | Name: STORE\_ThisOppProd Assignment: {!sObj\_coll\_OppProdtoCreate} Add {!sObj\_This\_OpportunityProduct} | Adds the calculated Opp Prod lines to the collvar to create. |
DML to create multiple records stored in a collection sObj variable | CREATE\_OppProds | Creates the configured OppProd. |
Decision to check selected elements for processing | Decision: CHECK\_PBESelected Outcome one: CHECK\_PBESelected\_Yes Outcome two: CHECK\_PBESelected\_No Default Outcome: Catastrophic Failure | Check if at least one row was selected. Otherwise terminates to an error screen. |
Decision to sort elements based on criteria | Decision: DEC\_SortOverrides Outcome one: SortOverrides\_Fields Outcome two: SortOverrides\_Values Outcome three: SortOverrides\_Full Default Outcome: Catastrophic Failure | Based on user selection, check if we need to override information within the records, and which information needs to be overridden. |
Email Alert sent from Flow informing user of Invoice reception | EA01\_EI10\_InvoiceReceived | Sends template EI10 with details of the Invoice to pay |
The main reasons are that it is easy to deploy, and easy to revert to a prior version of anything you deploy - proper CI/CD depends on GIT being used, which ensures that everything you do can be rolled back in case of bugs.
**Deployment Method** | **Advantages** | **Disadvantages** |
---|---|---|
**Change Sets** | \- Easy to use with a graphical interface \- No additional setup required | \- Limited to connected orgs \- Manual and time-consuming \- No version control \- Can be done ad-hoc |
**Metadata API** | \- Supports complex deployments \- Can be automated \- Broad coverage | \- Requires programming knowledge \- Steeper learning curve |
**Salesforce CLI (SFDX)** | \- Advanced automation \- Supports modern DevOps practices \- Version control | \- Steeper learning curve \- Initial setup and configuration required \- Requires trained staff to maintain |
**Third-Party Tools** | \- User-friendly interfaces \- Advanced features and integrations | \- Additional costs \- May have proprietary limitations |
Despite the complexity inherent in SFDX-based deployments, the benefits are substantial. They enable easy and frequent deployments, better testing by customers, smoother go-lives, and a general reduction in stress around project development and deployment cycles. The structured approach of SFDX ensures that deployments are reliable, repeatable, and less prone to errors.
To stay fact-based: SFDX deployments allow deploying multiple times a *week* in a few minutes per deployment. This allows *very easy user testing*, and also allows finding *why* a specific issue cropped up. You can check the Examples section to see how and why this is useful. It is perfectly true that these deployments require more technical knowledge than third-party tools like Gearset or Change Sets. It is our opinion that the tradeoff in productivity is worth the extra training and learning curve.
One thing that is often overlooked - you can NOT do proper CI/CD without plugging the deployment into your project management. This means the entire project management MUST be thought out around the deployment logic.
**This training is split into the following chapters:** - **Chapter 1: The Why, When and By Whom** This chapter explores the fundamental considerations of Salesforce deployments within the context of consulting projects. It addresses: - **Why Deploy**: The importance and benefits of deploying Salesforce metadata throughout the project lifecycle, from the build phase to UAT to GoLive. - **When**: When in the project timeline should deployments be planned and executed to ensure smooth progress and mitigate risks. (Hint - it's often, but not in every org) - **By Whom**: Roles and responsibilities involved in the deployment process, such as consultants committing changes, architects reviewing commits and system elements, and release managers overseeing and executing deployments. **Chapter 2: The What, How and How Frequently** This chapter delves into the practical aspects of Salesforce deployments: - **What**: Overview of the deployment tools used, including GIT, SFDX, SGD, Bitbucket, and your trusty Command Line. - **How**: Detailed workflows and methodologies for using these tools effectively, tailored to specific roles within the project team (consultants, architects, release managers). - **How Frequently**: Recommendations on the frequency of deployments throughout the project timeline to maintain agility, minimize conflicts, and ensure continuous integration and delivery. **Chapter 3: An Example Project and Deployment Flow** This chapter provides a hands-on example to illustrate a typical project scenario and the corresponding deployment processes: - **Example Project**: Overview of a hypothetical Salesforce consulting project, including its scope and objectives. - **Deployment Flow**: Step-by-step walkthrough of the deployment lifecycle, from initial planning and setup through to execution and validation. - **Best Practices**: Highlighting best practices and potential challenges encountered during the deployment process. **Chapter 4: Configurations, Templates and Setup** This chapter focuses on the essential configurations and setup required to streamline the deployment process: - **Configurations**: Detailed guidance on configuring Salesforce environments for efficient deployment management. - **Templates**: Templates and reusable patterns for standardizing deployments and ensuring consistency across projects. - **Setup**: Practical tips and strategies for setting up deployment pipelines, integrating with version control systems, and automating deployment tasks. These chapters collectively provide a comprehensive guide to mastering Salesforce deployments within a consulting company, covering both strategic considerations and practical implementation details. # Chapter 1: The Why, When and By Whom This chapter explores the fundamental considerations of Salesforce deployments within the context of consulting projects. It addresses: - **Why Deploy**: The importance and benefits of deploying Salesforce metadata throughout the project lifecycle, from the build phase to UAT to GoLive. - **When**: When in the project timeline should deployments be planned and executed to ensure smooth progress and mitigate risks. (Hint - it's often, but not in every org) - **By Whom**: Roles and responsibilities involved in the deployment process, such as consultants committing changes, architects reviewing commits and system elements, and release managers overseeing and executing deployments. ## Why do I Deploy ? 
In traditional software development, deployments often occur to migrate changes between environments for testing or production releases. However, in the context of Continuous Integration (CI) and Salesforce development, deployments are just synchronization checkpoints for the application, regardless of the target organization. Said differently, in CI/CD, deployments are just a way to push commits to the environments that require them. **CI deployments are frequent, automated, and tied closely to the development cycle. Deployments are never the focus in CI/CD, and what is important is instead the commits and the way that they tie into the project management - ideally into a ticket for each commit.**
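In practice, under the assumption that your pipeline uses the Salesforce CLI and that `qa` is an org alias you have already authenticated, a deployment boils down to pushing already-committed metadata - a sketch:

```
# Make sure the local branch holds exactly the commits agreed for this checkpoint
git pull

# Push the committed metadata to the target org ("qa" is a made-up alias)
sf project deploy start --source-dir force-app --target-org qa
```

The commits are the unit of work; the deploy command itself is deliberately boring.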
In software development, a **commit** is the action of saving changes to a version-controlled repository. It captures specific modifications to files, accompanied by a descriptive message. Commits are atomic, meaning changes are applied together as a single unit, ensuring version control, traceability of changes, and collaboration among team members. Commits are part of using [Git.](http://rogerdudler.github.io/git-guide/ "git tutorial") Git is a distributed version control system used to track changes in source code during software development. It is free and widely used, within Salesforce and elsewhere.
So if deployments are just here to sync commits... ## Why do I commit ? > ##### As soon as a commit is useful, or whenever a day has ended. Commits should pretty much be done "as soon as they are useful", which often means you have fulfilled **one** of the following conditions: - you have finished working a ticket; - you have finished configuring or coding a self-contained logic, business domain, or functional domain; - you have finished correcting something that you want to be able to revert easily; - you have finished a hotfix; - you have finished a feature. This will allow you to pull your changes from the org, commit your changes referencing the ticket number in the Commit Message, and then push to the repository. This will allow others to work on the same repository without issues and to easily find and revert changes if required.You should also commit to your local repository whenever the day ends - in any case you can squash those commits together when you merge back to Main, so trying to delay commits is generally a bad idea.
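As a sketch of what "commit when it's useful" looks like day to day - the ticket key `PRJ-123`, the org alias `my-dev-org`, and the branch name are all made up for illustration:

```
# Pull your declarative work from the dev org into the local project
sf project retrieve start --source-dir force-app --target-org my-dev-org

# Commit it against the ticket you were working on, then share it
git add force-app
git commit -m "PRJ-123: Account tax information before-save flow"
git push origin feature/PRJ-123
```

Referencing the ticket key in the commit message is what lets JIRA (or any other tracker) link the commit back to the work item.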
Take the Salesforce-built "[DevOps Center](https://help.salesforce.com/s/articleView?language=en_US&id=sf.devops_center_overview.htm&type=5)" for example. It ties every commit to a [Work Item](https://help.salesforce.com/s/articleView?id=sf.devops_center_work_item_changes_tab.htm&type=5) and allows you to choose which elements from the metadata should be added to the commit. It then asks you to add a quick description and you're done. This is the same logic we apply to tickets in the above description. If you're wondering "why not just use DevOps Center", the answer is generally "you definitely should if you can, but you sometimes can't because it is proprietary and it has limitations you can't work around". Also because if you learn how to use the CLI, you'll realise pretty fast that it goes WAY faster than DevOps Center.
To tie back to our introduction - this forces a division of work into Work Items, Tickets, or whatever other Agile-ism you use internally, at the project management level. **DevOps makes sense when you work iteratively, probably in sprints, and when the work to be delivered is well defined and packaged.**
This is because.... ## When do I Deploy ? Pretty much all the time, but not **everywhere.** In Salesforce CI/CD, the two main points of complexity in your existing pipeline are going to be: - The first integration of a commit into the pipeline - The merging of multiple commits, especially if you have the unfortunate situation where multiple people work in the same org. The reasons for this are similar but different. In the case of the first integration of a commit into the pipeline, most of the time, things should be completely fine. The problem is one that everyone in the Salesforce space knows very well. The Metadata API **sucks**. And sadly, SFDX... also isn't perfect. So sometimes, you might do everything right, but the MDAPI will throw some file or some setting that, while valid in output, is invalid in input. Meaning Salesforce happily gives you something you can't deploy. If this happens, you will get an error when you first try to integrate your commit into an org. This is why some pre-merge checks ensure that the commit you did can be deployed back to the org (sketched below). In the case of merging multiple commits, the reason is **also** that the Metadata API **sucks.** It will answer the same calls with metadata that is not ordered the same way within the same file, which will lead Git to think there's tons-o-changes... Except not really. This is mostly fine as long as you don't have to merge your work with someone else's where they worked on the same piece of metadata - if so, there is a non-zero chance that the automated merging will fail. In both cases, the answer is "ask your senior how to solve this if the pipeline errors out". In both cases also, the pipeline should be set up to cover these cases and error out gracefully. ***"What does that have to do with when I deploy? Like didn't you get lost somewhere?"*** The relation is simple - you should deploy pretty much ASAP to your remote repo, and merge frequently to the main work repository. You should also pull the remote work frequently to ensure you are in sync with others. Deploying to remote will run the integration checks to ensure things can be merged, and merging will allow others to see your work. Pulling the others' work will ensure you don't overwrite stuff.
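That pre-merge "can this commit actually be deployed" check can be reproduced locally before you push. A minimal sketch, assuming the `qa` alias and `force-app` source directory from earlier:

```
# Validate the deployment without saving anything to the org:
# the metadata is checked (and tests can run), then the result is discarded.
sf project deploy start --source-dir force-app --target-org qa --dry-run
```

If this fails with one of the MDAPI's "valid in output, invalid in input" quirks, you have found the problem on your machine instead of in the pipeline.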
Deploying to QA or UAT should be something tied to the project management cycle and is not up to an individual contributor. For example, you can deploy to QA every sprint end, and deploy to UAT once EPICs are flagged as ready for UAT (a manual step).
## Who Deploys ? Different people across the lifecycle of the project. **On project setup**, the DevOps engineer that sets up the pipeline should handle both the setup and the deployments. **For standard work**, you should deploy to your own repo, and the automated system should merge to common if all's good. **For end of sprints**, the automated pipeline should deploy to QA. **For UAT**, the Architect assigned to the project should run the required pipelines. In most cases, the runs should be automatic, and key points should be covered by technical people. # Chapter 2: Software List This chapter explores the actual tools we are using in our example, the basic understanding needed for each tool, and an explanation of why we're doing things this way. In short, our example relies on: - **Git** - A **Git** frontend if you are unused to Git - **GitKraken** is nice for Windows, or **Sourcetree.** - There's a quite nice VSCode extension that handles Git properly. - **Bitbucket** - A good text editor (**VSCode** is fine, I prefer Sublime Text) - The SF command line - **SFDMU**, a SF command line extension - **SGD**, a SF command line extension - **Code Analyzer**, a SF command line extension - **JIRA** - **A terminal emulator** (Cmder is nice for Windows, iTerm2 for Mac is fine) You can completely use other tools if your project, your client, or your leadership want you to use other things. The main reason we are using these in this example is that it relies on a tech stack that is very present with customers and widely used at a global level, while also leveraging reusable things as much as possible - technically speaking a lot of the configuration we do here is directly reusable in another pipeline provider, and the link to tickets is also something that can be integrated using another provider. **In short "use this, or something else if you know what you're doing".**
## **So What are we using** ### The CLI The first entrypoint into the pipeline is going to be the **Salesforce Command Line.**You can download it [here](https://developer.salesforce.com/tools/salesforcecli). If you want a graphical user interface, you should set up VSCode, which you can do by following this [Trailhead](https://trailhead.salesforce.com/content/learn/projects/quick-start-lightning-web-components/set-up-visual-studio-code). You can start using the CLI directly via the terminal if you already know what you're doing otherwise. If you're using VSCode, download [Azul ](https://www.azul.com/downloads/#zulu)as well to avoid errors down the line.We'll be using the Salesforce CLI to: - login to organizations, and avoid that pesky MFA; - pull changes from an organization once our config is done; - rarely, push hotfixes to a UAT org.
For some roles, mainly architects and developers, we will also use it to: - validate deploys to specific orgs in cases of hotfixes; - set up SGD jobs in cases of commit-based deploys or destructive changes; - set up SFDMU jobs for any data-based transfers. What this actually does is allow you to interact with Salesforce. We will use it to get the configuration, security, and setting files that we will then deploy. This allows us not only to deploy, but also to have a backup of the configuration, and an easy way to edit it via text editing software.
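A minimal sketch of the everyday CLI usage described above - `client-dev` is a made-up org alias, and the login URL assumes a sandbox:

```
# Log in once per org (opens a browser and handles MFA), storing an alias
sf org login web --alias client-dev --instance-url https://test.salesforce.com

# Pull the configuration you just changed in Setup into the local project
sf project retrieve start --source-dir force-app --target-org client-dev
```

Everything else (SGD, SFDMU, deploys) builds on these two operations.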
The configuration needed is literally just the installation to start - we'll set up a full project later down the line. ### GIT You'll then need to download [Git](https://git-scm.com/downloads), as well as a [GUI](https://git-scm.com/downloads/guis/) if you're not used to using it directly from the command line. Git is VERY powerful but also quite annoying to learn fully, which is why we will keep its usage simple in our case.We'll be using Git to: - version our work so we can easily go back to earlier configurations in case of issues; - document what we did when we modified something; - get the work that other people have done; - upload our work to the repositories for the project.
You'll need a bit more configuration once you're done installing - depending on the GUI you use (or if you're using the command line) the *how* depends on the exact software, but in short you'll need to [configure git](https://support.atlassian.com/bitbucket-cloud/docs/install-and-set-up-git/) with your user name and your user email.Logging in to Bitbucket and getting your repository from there will come later - once you've given your username and email, and configured your UI, we will consider that you are done for now.
If you're a normal user, this is all you'll see of git. If you're a Dev or an Architect, you'll also be using the Branches and Merges functions of Git - mostly through the Bitbucket interface (and as such, with Pull Requests instead of Merges). ### Bitbucket As said in intro, we're using bitbucket because we're using bitbucket. You can use Github, Gitlab, Gitea, whatever - but this guide is for bitbucket. Bitbucket, much like Salesforce, is a cloud solution. It is part of the Atlassian cloud offering, which also hosts JIRA, which we'll be configuring as well. You'll need to authenticate to your workspace (maybe get your Administrator to get you logins), in the format [https://bitbucket.org/myworkspace](https://bitbucket.org/myworkspace) You will see that Bitbucket is a Git Server that contains Git Repositories. In short, it is the central place where we'll host the different project repositories that we are going to use. Built on top of the Git server are also subordinate functions such as Pull Requests, Deployments, Pipelines - which we're all going to use. Seeing as we want this to be connected with our Atlassian cloud, we'll also ask you to go to [https://bitbucket.org/account/settings/app-passwords/](https://bitbucket.org/account/settings/app-passwords/) which allows you to create application passwords, and to create one for Git. In detail: - **Repositories:** Developers store their Salesforce metadata and code in Bitbucket repositories. Each repository can represent a project or a component of a larger Salesforce application. - **Branching:** Developers create branches for new features, bug fixes, or enhancements. This allows multiple developers to work on different parts of the codebase simultaneously without interfering with each other. - **Pull Requests:** When a feature or bug fix is complete, a pull request is created. Other team members review the changes before they are merged into the main branch, ensuring code quality and consistency. - **Commits:** Developers commit their changes to Bitbucket, providing a detailed commit message. These messages often include references to JIRA ticket numbers (e.g., "Fixed bug in login flow \[JIRA-123\]"). - **JIRA:** When a commit message includes a JIRA ticket number, JIRA can automatically update the status of the ticket, link the commit to the ticket, and provide traceability from issue identification to resolution. - **Pipelines:** Bitbucket Pipelines can be configured to automatically build, test, and deploy Salesforce code changes. This ensures that changes are validated before being merged and deployed to production. It does so using **Deployments** - which in bitbucket means "the installation of code on a remote server", in our case Salesforce. ### Extra Stuff #### CLI Extensions ##### SGD SGD, or [Salesforce-Git-Delta](https://github.com/scolladon/sfdx-git-delta) is a command line plugin that allows the CLI to automatically generate a package.xml and a destructivechanges.xml based on the difference between two commits. It allows you to do in Git what the CLI does alone using Source Tracking. Why is it useful then ? Because Source Tracking is sometimes buggy, and also because in this case we're using Bitbucket, so it makes generating these deployment files independent from our machines. SGD is very useful for inter-org deployment, which should technically be quite rare. 
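As a sketch of what SGD does - flag names here follow the plugin's published examples and may differ between versions, so check the SGD README before copying this; `qa` is a made-up org alias:

```
# Build package.xml / destructiveChanges.xml from the diff between two commits
sf sgd source delta --from "HEAD~1" --to "HEAD" --output "."

# Deploy only that delta (SGD writes the manifest under package/)
sf project deploy start --manifest package/package.xml --target-org qa
```

In the pipeline, the same two steps run on the build agent rather than on your machine, which is exactly why SGD is valuable there.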
##### SFDMU SFDMU, or the [Salesforce Data Move Utility](https://help.sfdmu.com/), is another command line plugin which is dataloader on steroids for when you want to migrate data between orgs or back stuff up to CSVs. We use this because it allows migrating test data or config data (that last one should be VERY rare what with the presence of [CMTD](https://help.salesforce.com/s/articleView?id=sf.custommetadatatypes_overview.htm&language=en_US&type=5) now) very easily, including if you have hierarchies (Contacts of Accounts, etc). ##### Code Analyzer Code Analyzer is the Salesforce CLI's static analysis plugin - it bundles engines such as PMD and ESLint to scan your code for issues before you commit or deploy. #### AzulJDK Basically just Java, but free. We don't use the old Java runtime because licensing is now extremely expensive. #### A terminal emulator If you don't spend a lot of time in the Terminal, you might not see that terminals aren't all equal. A nice terminal emulator gives you things like copy/paste, better UX in general. It's just quality of life. #### A text editor You should use VSCode unless you really want to do everything in separate apps. If you're an expert you can use whatever floats your boat. # Chapter 3: Basic Machine Setup ## 1 - Install Local Software If you are admin on your machine, download [Visual Studio Code](https://code.visualstudio.com/) from this link. Otherwise, use whatever your IT has to install software, whether it be [Software Center](https://learn.microsoft.com/fr-fr/mem/configmgr/core/understand/software-center), opening a ticket, or anything else of that ilk. As long as you're doing that, you can also install [a JDK like AZUL](https://www.azul.com/downloads/#zulu), as well as [Git](https://git-scm.com/downloads), and a nice [terminal emulator](https://cmder.app/). Also remember to install the [Salesforce CLI](https://developer.salesforce.com/tools/salesforcecli). These elements are all useful down the line, and doing all the setup at once avoids later issues.
## 2 - Configure the CLI Opening your beautiful terminal emulator, run `sf update` You should see `@salesforce/cli: Updating CLI` run for a bit.If you see an error saying `sf` is not a command or program, something went wrong during the installation in step 1. Contact your IT (or check the installation page of the CLI if you're Admin or not in an enterprise context).
Once that's done, run `echo y | sf plugins install sfdmu sfdx-git-delta code-analyzer` Because sgd is not signed, you will get a warning saying that "This plugin is not digitally signed and its authenticity cannot be verified". This is expected, and you will have to answer `y` (yes) to proceed with the installation.
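To confirm the plugins registered correctly, you can list what the CLI sees - each plugin should appear with a version number:

```
sf plugins
```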
Once you've done that, run: `git config --global user.name "FirstName LastName"` replacing Firstname and Lastname with your own. `git config --global user.email "email@server.tld"` replacing the email with yours. **If you're running Windows** - `git config --global core.autocrlf true` **If you're running Mac or Linux** - `git config --global core.autocrlf input` The above commands tell git who you are, and how to handle line endings. All of this setup has to be done once, and you will probably never touch it again.
Finally, run `java --version` If you don't see an error, and you see something like `openjdk 21.0.3 2024-04-16 LTS`, then you installed Zulu properly and you're fine.
## 3 - Configure VSCode Open up VSCode. Go to the Extensions in the side panel (it looks like three squares) [](https://wiki.sfxd.org/uploads/images/gallery/2024-07/image-1720101159401.png) and search for "Salesforce", then install - Salesforce Extensions Pack - Salesforce Extensions Pack (Expanded) - Salesforce Package.xml Generator for VS Code - Salesforce CLI Command Builder - Salesforce XML Formatter Then search for Atlassian and install "Jira and Bitbucket (Atlassian Labs)". Finally, search for and install "GitLens - Git supercharged". Then go to **Preferences > Settings > Salesforcedx-vscode-core: Detect Conflicts At Sync** and check this checkbox. Once all this is done, I recommend you go to the side panel, click on Source Control, and drag-and-drop both the Commit element and the topmost element to the right of the editor.All this setup allows you to have more visual functions and shortcuts. If you fail to install some elements, it cannot be guaranteed that you will have all the elements you are supposed to.
This concludes basic machine setup. All of this should not have to be done again on an already configured machine.
# Chapter 4 - Base Project Setup This chapter explores how to set up your project management and version control integration, ensuring proper tracking from requirement to deployment. ## Initial Project Creation ### SFDX Project Setup Create Base Project
```
# Omit --namespace if it does not apply to your project
sf project generate \
  --name "your-project-name" \
  --namespace "your_namespace" \
  --template standard \
  --default-package-dir force-app
```
### Required Project Structure
```
your-project-name/
├── config/
│   └── project-scratch-def.json
├── force-app/
│   └── main/
│       └── default/
├── scripts/
│   ├── apex/
│   └── soql/
├── .forceignore
├── .gitignore
├── package.json
└── sfdx-project.json
```
### Configuration Files Setup `.forceignore` Essential Entries
```
# Standard Salesforce ignore patterns
**/.eslintrc.json
**/.prettierrc
**/.prettierignore
**/.sfdx
**/.sf
**/.vscode
**/jsconfig.json

# Package directories
**/force-app/main/default/profiles
**/force-app/main/default/settings
```
`.gitignore` Essential Entries