Flow Conventions

Naming and structural conventions related to Flows and the Cloud Flow Engine.

Flow General Notes

Generalities

As of the writing of this page (August 10th, 2023), Flows are the primary source of automation on the Salesforce platform. We left this sentence in because the earlier iteration (from 2021) identified that Flows would replace Process Builder, and we like being right.

It is very important to note that Flows have almost nothing to do, complexity-wise, with Workflows, Process Builder, or Approval Processes. Where the old tools did a lot of (over)-simplifying for you, Flow exposes a lot of things that you quite simply never had to think about before, such as execution context, DML optimization, batching, variables, variable passing, etc.

So if you are an old-timer upgrading your skills, note that a basic understanding of programming (batch scripting is more than enough) helps a lot with Flow.
If you're a newcomer to Salesforce and you're looking to learn Flow, same comment - this is harder than most of the platform (apart from Permissions) to learn and manipulate. This is normal.

Intended Audience

These conventions are written for all types of Salesforce professionals to read, but the target audience is the administrator of an organization. If you are an ISV, you will have considerations regarding packaging that we do not, and if you are a consultant, you should ideally use whatever the client wants (or the most stringent convention available to you, to guarantee quality).

On Conventions

As long as we're doing notes: conventions are opinionated, and these are no different. Much like you have different APEX trigger frameworks, you'll find different conventions for Flow. These specific conventions are made to be maintainable at scale, with ease of modification and upgrade. This means that they by nature include boilerplate you might find redundant, and specify elements very strictly (to optimize for cases where you have hundreds of Flows in an organization). This does not mean you need to follow everything. A reader should try to understand why the conventions are a specific way, and then decide whether or not this applies to their org.

At the end of the day, as long as you use any convention in your organization, we're good. This one, another one, a partial one, doesn't matter. Just structure your flows and elements.

On our Notation

Finally, regarding the naming of sub-elements in Flows: we've had conversations in the past about the pseudo-Hungarian notation that we recommend using. To clarify: we don't use Hungarian notation because we want to. We do so because Flow still doesn't split naming schemes between variables, screen elements, and data manipulation elements. This basically forces you into Hungarian notation so you can have both a var_boolUserAccept and a S01_choiceUserAccept (a variable holding the result of whether a user accepts some conditions, and the radio-button presentation of said acceptance), because you can't have two elements simply named UserAccept, even if technically they're different things.

On custom code, plugins, and unofficialSF

On another note: Flow allows you to use custom code to extend its functionality. We define "custom code" as any LWC, APEX class, or associated component that is written by a human and plugs into Flow. We recommend using as few of these elements as possible, and as many as needed. This includes UnofficialSF.

Whether you code stuff yourself or someone else does it for you, custom code always requires audit and maintenance. Deploying UnofficialSF code to your org basically means that you own its maintenance and audit, much as if you had developed it yourself. We have the same reservations as with any piece of code on GitHub: if you don't know exactly what it does, you shouldn't be using it. Any third-party code is not part of your MSA with Salesforce, and if it breaks, becomes an attack vector, or otherwise negatively impacts your business, you have no official support or recourse.

This is not to say that these things are not great or value-adding - but you are (probably) the admin of a company CRM, which means your first consideration should be user data and compliance, with ease of use coming second.

Bonus useless knowledge: Flows derive from an older technology that Salesforce released in 2010, the Visual Process Manager, which itself was essentially a scripting language: "The technology powering the Visual Process Manager is based on technology acquired from Informavores, a call scripting startup Salesforce bought last year." (2009) Source

What Automation do I create Flowchart

A flowchart describing which automation to create.

Flow Meta Conventions

Read these Resources first

  1. The official Flows best practices doc. Note that we agree on most things, specifically on the need to plan out your Flow first.

  2. The Flows limits doc. If you don't know the platform limits, how can you build around them?

  3. The Transactions limits doc. Same as above, gotta know limits to play around them.

  4. The What Automation Do I Create Flowchart. Not everything needs to be a Flow.
  5. The Record-Triggered Automation Guide, if applicable.

Best Practices

These are general best practices that pertain not to individual flows, but to Flows as a whole and their usage within an Organization.

On Permissions

Flows should ALWAYS execute with the smallest set of permissions needed to carry out their task.
Users should also ideally not have access to Flows they don't require.
Giving Setup access just so someone can look up a DeveloperName is bad; store those Ids in Custom Labels and reference the labels instead, precisely to limit Setup access.

Use System mode sparingly. It is dangerous.
If it is used in a Communities setting, I REALLY hope you know why you're exposing data publicly over the internet, or that you're only committing information with no GETs.

Users can be granted access to specific Flows via their Profiles and Permission Sets, which you should really be using to ensure that normal users can't run, for example, the Flow that mass-updates the client base.

Record-Triggered Flows and APEX Triggers should ideally not coexist on the same object in the same Context.

"Context" here means the APEX Trigger Context. Note that not all of these contexts are exposed in Flow:

- Screen Flows execute outside of these contexts, but their Update elements do not allow you to carry out operations in the before context.
- Record-Triggered Flows execute in either the before or the after context, depending on what you chose at the Flow creation screen (the options are named "Fast Field Updates" and "Actions and Related Records", respectively, because it seems Salesforce and I disagree on whether training people to properly understand how the platform works is important).

The reason for the "same context" exclusivity shows up when you have multiple Flows plus heavy custom APEX logic: in short, unless you plan explicitly for it, the presence of one or the other forces you to audit both during any additional development or routine maintenance.
You could technically leverage Flows and APEX together perfectly well, but if you have a before Flow and a before Trigger both updating fields, and you accidentally reference the same field in both... debugging that is going to be fun.
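To make the clash concrete, here is a hedged, hypothetical sketch (the object and field are invented for illustration): a before-update trigger writing a field that a before-save Flow on the same object also writes. Per the order of execution, before-save Flows run before APEX before triggers, so the trigger silently wins, with no error or warning anywhere.

```apex
// Hedged illustration - assumes a before-save Flow on Order also sets
// Description. Before-save Flows run first in the order of execution,
// so this trigger overwrites the Flow's value without any error.
trigger OrderBefore on Order (before update) {
    for (Order o : Trigger.new) {
        o.Description = 'Set by trigger'; // clobbers whatever the Flow set
    }
}
```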

So if you start relying on APEX Triggers, while this doesn’t mean you have to change all the Flows to APEX logic straight away, it does mean you need to plan for a migration path.

In the case where some automations need to be admin-editable but others require custom code, you should be migrating the Triggers to APEX, and leveraging Subflows which get called from your APEX logic.
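As a hedged sketch of that pattern, APEX can start an autolaunched Subflow by name via the Flow.Interview API. The Flow name and input variable below are hypothetical; the Subflow must declare a matching input variable.

```apex
// Hedged sketch: invoking an autolaunched Subflow from APEX trigger logic.
// 'Order_Recalculate_Totals' and 'recordIds' are hypothetical names.
Map<String, Object> inputs = new Map<String, Object>{
    'recordIds' => new List<Id>(Trigger.newMap.keySet())
};
Flow.Interview subflow = Flow.Interview.createInterview('Order_Recalculate_Totals', inputs);
subflow.start();
```

This keeps the ordering logic in APEX (where it is testable and deterministic) while the admin-editable business rules stay in the Subflow.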

Flow List Views should be used to sort and manage access to your Flows easily

The default list view is not as useful as others can be.
We generally suggest creating at minimum one list view, and two if you have installed packages that ship Flows.

Flows are considered Code for maintenance purposes

Do NOT create or edit Flows in Production, especially a Record-Triggered flow.
If any user does a data load operation and you corrupt swaths of data, you will know the meaning of “getting gray hairs”, unless you have a backup - which I am guessing you will not have if you were doing live edits in production.

No, this isn't a second helping of our note in the General Notes.
This is about your Flows - the ones you built, the ones you know very well and are proud of.
There is a swath of reasons to consider Flows to be Code for maintenance purposes, but in short: Flow is a quick and admin-friendly flavor of development, but it is still development.

On which automation to create

In addition to our (frankly not very beautiful) Flowchart, when creating automations, the order of priority should be:

On APEX and LWCs in Flows

To reiterate, if you install unpackaged code in your organization, YOU are responsible for maintaining it.

Flow Testing and Flow Tests

If at all possible, Flows should be Tested. This isn't always possible because of a number of considerations (which aren't actually exhaustive - I have personally seen edge cases where Tests fail but the actual function runs fine, because of the way Tests are built, and I have also seen deployment errors linked to Tests). Trailheads exist to help you get there.

A Flow Test is not just a way to check your Flow works. A proper test should:
- Test the Flow works
- Test the Flow works in other Permission situations
- Test the Flow doesn't work in critical situations you want to avoid [if you're supposed to send one email, you should probably catch the situation where you're sending 5 mil]
... and in addition to that, a proper Flow Test will warn you if things stop working down the line.

Most of these boilerplates are negative bets against the future - we are expecting things to break, people to forget configuration, and updates to be made out of process. Tests are a way to mitigate that.

We currently consider Flow Tests to be "acceptable but still bad", which we expect to change as time goes on, but as it's not a critical feature, we aren't sure when they'll address the current issues with the tool.

Note that proper Flow Testing will probably become a requirement at some point down the line.

On Bypasses

Flows, like many things in Salesforce, can be configured to respect Bypasses.
In the case of Flows, you might want to call these "feature flags".

This is a GREAT best practice, but is generally overkill unless you are a very mature org with huge amounts of processes.
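A minimal sketch of what a bypass can look like, assuming a Custom Permission named Bypass_Order_Automation (a hypothetical name): in a Flow, the first Decision checks the {!$Permission.Bypass_Order_Automation} global variable; the APEX equivalent, inside your handler method, is:

```apex
// Hedged sketch - the Custom Permission name is hypothetical.
// FeatureManagement.checkPermission returns true if the running user
// has the Custom Permission, letting you skip the automation entirely.
if (FeatureManagement.checkPermission('Bypass_Order_Automation')) {
    return; // bypass is active for this user: do nothing
}
```

Assigning the Custom Permission via a Permission Set then flips the flag per user, without touching the automation itself.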


Flow Structural Conventions - Common Core

As detailed in the General Notes section, these conventions are heavily opinionated towards maintenance and scaling in large organizations. The conventions contain:

Due to their nature of being triggered by the user and outside of a specific record context, Screen Flows do not require specific structural adaptations at the moment that are not part of the common core specifications.

Common Core Conventions

On System-Level Design

Do not do DMLs or Queries in Loops.

Simpler: No pink squares in loops.

DML is Data Manipulation Language. Basically it is what tells the database to change stuff. DML Operations include Insert, Update, Upsert, and Delete, which you should know from Data Loader or other such tools.

Salesforce now actually warns you when you're doing this, but it still bears saying.

A screenshot of a pink Create Records element inside a loop, labelled "Don't do this".

You really must not do this because every loop iteration consumes transaction limits: you get 100 SOQL queries and 150 DML statements per transaction, so a Flow that queries or commits inside a loop will start failing as soon as record volumes grow.
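The APEX equivalent of the rule makes the fix obvious: accumulate the changes inside the loop, then issue a single DML after it. In Flow, the same shape is an Assignment adding the record to a collection variable inside the loop, and one Update element on the collection after the loop. (A hedged sketch; the object and field are arbitrary.)

```apex
// Hedged sketch - bulkified pattern. No DML inside the loop.
List<Contact> toUpdate = new List<Contact>();
for (Contact c : contacts) {
    c.Description = 'Processed';
    toUpdate.add(c);      // just accumulate in memory
}
update toUpdate;          // one DML statement for the whole batch
```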
All Pink (DML or Query) elements should have Error handling

Error, or Fault Paths, are available both in Free Design mode and the Auto-Layout Mode. In Free mode, you need to handle all possible other paths before the Fault path becomes available. In Auto-Layout mode, you can simply select Fault Path.

Screen Flow? Throw a Screen, and display what situation could lead to this. Maybe also send the Admin an email explaining what happened.

A screenshot of Screen Flow error handling.

Record-Triggered Flow? Throw an email to the APEX Exception Email recipients, or emit a Custom Notification.
Hell, better yet, throw that logic into a Subflow and call it from wherever.

(Note that if you are in a sandbox with email deliverability set to System Only, regular flow emails and email alerts will not get sent.)

A screen flow with multiple FAULT paths going to proper error handling.

Handling Errors this way allows you to:
- not have your users presented with UNEXPECTED EXCEPTION - YOUR ADMIN DID THINGS BADLY
- maybe deflect a few error messages, in case some things can be fixed by the user doing things differently
- have a better understanding of how often Errors happen.

You want to supercharge your error handling? Audit Nebula Logger to see if it can suit your needs. With proper implementation (and knowledge of how to service it, remember that installed code is still code that requires maintenance), Nebula Logger will allow you to centralize all logs in your organization, and have proper notification when something happens - whether in Flow, APEX, or whatever.
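As a hedged sketch (check the current Nebula Logger documentation before relying on it), logging from APEX looks like the following; the package also ships invocable actions, so Flows can add entries and save them through the same pipeline:

```apex
// Hedged sketch of Nebula Logger usage from APEX.
// Entries are buffered in memory, then persisted in one go by saveLog().
Logger.error('Order sync failed', someOrderId);
Logger.saveLog();
```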

Don't exit loops based on decision checks

The Flow engine doesn't support this well, and you will run into weird and confusing issues if you ever go back into the main loop.

A flow with a Decision element allowing an exit from a Loop, which is a bad practice.

Don’t do this either - always finish the loop

Issues include variables not being reset, DML errors if you do come back to the loop, and all around general unpredictable situations.
You can still do this if you absolutely NEVER come back to the loop, but it's bad design.

Do not design Flows that will have long Wait elements

This is often done by Admins coming from Workflow or Process Builder space, where you could just say "do that 1 week before contract end date" or "1 day after Opportunity closure". This design is sadly as outdated as the tools that permitted it.
Doing this will have you exceed your Paused Interview limits, and actions just won't be carried out.

A proper handling of "1 day before/after whenever", in Flow, is often via a Scheduled Flow.
Scheduled Flows execute once daily (or more often if you use plugins to allow it), check conditions, and act based on these conditions. In the above case, you would be creating a Scheduled Flow that runs daily, selects the records whose date condition is met, and carries out the action on them.

Despite it not being evident in the Salesforce Builder, there is a VERY big difference between putting criteria in the Scheduled Flow's Start element and doing an initial GET.
- Putting criteria in the Start element offers fewer condition options, but effectively limits the scope of the Flow to only those records, which is great in big environments. It also fires one Flow Interview per record, then bulkifies operations at the end - so adding a GET on top of Start-element criteria should be done only after due consideration.
- Conversely, putting no criteria and relying on an initial Get runs a single Flow Interview, and so will run less effectively on huge amounts of records, but does allow you to handle more complex selection criteria.

Do not Over-Optimize your Flows

When Admins start becoming great at Flows, everything looks like a Flow.
The issue with that is that sometimes Admins will start building Flows that shouldn't be built, because Users should be using standard features instead (yes, I know, convincing Users to change habits can be nigh impossible, but it is sometimes still the right path)... and sometimes they will keep building Flows that should just be APEX instead.

If you are starting to hit CPU timeout errors, Flow Element Count errors, huge amounts of slowness... You're probably trying to shove things in Flow that should be something else instead.

APEX has more tools than Flows, as do LWCs. Sometimes, admitting that Development is necessary is not a failure - it's just good design.

On Flow-Specific Design

Flows should have one easily identifiable Triggering Element

This relates to the Naming Conventions.

The Triggering Element depends on the Flow Type:

- Record-Triggered Flows: the Record that triggers the DML.
- Event-based Flows: a single event, as simple as possible.
- Screen Flows: either a single recordId, a single sObject variable, or a single sObject collection variable. In all cases, the Flow that is being called should query what it needs by itself, and output whatever is needed in its context.
- Subflows: the rule can vary. It can be useful to pass multiple collections to a Subflow in order to avoid recurring queries on the same object. However, passing multiple single-record variables, or single text variables, to a Subflow generally indicates a design that is overly coupled with the main flow and should be more abstracted.

A screenshot of a Flow List view.

Fill in the descriptions

You'll thank yourself when you have to maintain it in two years.
Descriptions should not be technical, but functional: a Consultant can read the Flow's elements to see what it does technically. The Description should therefore explain what function the Flow provides within the given Domain (if applicable) of the configuration.

A screenshot of Flow descriptions. Descriptions shouldn't be too technical.

Don't use the "Set Fields manually" part of Update elements

Yes, it's possible. It's also bad practice. You should always rely on a record variable that you Assign values to, before using an Update element with "use the values from a record variable". This is mainly for maintenance purposes (in 99% of cases it lets you ignore the pink elements when tracing where a value is set), but it also matters when you do multi-record edits and have to manipulate the record variable and store the result in a record collection variable.

A screenshot of the "Get > Assign > Update" pattern in Flow elements.

A screenshot of the assignment details.

Try to pass only one Record variable or one Record collection to a Flow or Subflow

See "Tie each Flow to a Domain".
Initializing a lot of Record variables at run time often points to a Subflow that could be split into different functions. Passing Records as the Triggering Element, and configuration information as variables, is fine within reason.

In the example below, the Pricebook2Id variable should be taken from the Order variable.

A screenshot of the Flow Debug run screen.

Try to make Subflows as reusable as possible.

A Subflow that does a lot of different actions will probably be single-use, and if you need a subpart of it in another logic, you will probably build it again, which may lead to higher technical debt.
If at all possible, each Subflow should execute a single function, within a single Domain.
Yes, this ties into "service-based architecture" - we did say Flows were code.

Do not rely on implicit references

This is when you query a record, then fetch parent information via {MyRecord.ParentRecord__c.SomeField__c}. While this is useful, it's also very prone to errors (specifically with fields like RecordType) and makes for wonky error messages if the User does not have access to one of the intermediary records.
Do an explicit Query instead if possible, even if it is technically slower.

Tie each Flow to a Domain

This is also tied to Naming Conventions. Note that in the example below, the Domain is the Object that the Flow lives on. One might say it is redundant with the Triggering Object, except Scheduled Flows and Screen Flows don't have this populated, and are often still linked to specific objects, hence the explicit link.

Domains are definable as stand-alone groupings of function that have a clear Responsible Persona.

A schema of Domain segregation, illustrating that Domains are self-contained and communication with other domains is done via Events.

Communication between Domains should ideally be handled via Events

In short, if a Flow starts in Sales (actions that are taken when an Opportunity closes for example) and finishes in Invoicing (c