Split Step
Loop over a list and run a subflow for each item.
What it does
- Iterates through an array and executes a nested set of steps for every item.
- Inside the loop, the current element is exposed as item.* (for example, item.sku, item.quantity, item.order_id).
- Can run sequentially or in parallel.
- Can either fail fast on the first item error, or continue and return a mixed set of successes and failures.
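The behavior above can be sketched as a small loop; run_split and subflow are hypothetical names used only for illustration, and the error handling mirrors the fail-fast vs. continue choice:

```python
# Minimal sketch of sequential Split semantics (hypothetical helper names).
# Each element of the array is passed to the subflow as the current `item`;
# on error we either stop immediately or record the failure and continue.

def run_split(items, subflow, continue_on_failure=False):
    processed, failed = [], []
    for item in items:
        try:
            # The subflow sees the current element (item.* in the scenario).
            processed.append(subflow(item))
        except Exception as exc:
            if not continue_on_failure:
                raise  # fail fast: the first item error stops the whole run
            failed.append({"item": item, "errorMessage": str(exc)})
    return {"processed": processed, "failed": failed}
```

With continue_on_failure=True the result is the mixed set of successes and failures mentioned above; with False, the first exception propagates and aborts the run.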
When to use it
- When an action needs to run for each element of a list, such as:
- Order line items
- Orders in a batch
- Products from a catalog sync
- Rows from a CSV import
- Files in a folder / attachment list
- When a downstream system expects one call per item (for example, “reserve stock per SKU” or “create one shipment line per item”).
- When you want to speed up processing using parallel execution, but keep the scenario readable (no copy‑pasting the same steps).
- For business processes where each record is independent (or can be made independent) and your downstream system works item-by-item.
How to configure
- Select the source array to loop over (for example, data.items, data.order.lines, steps.fetch.outputs.items).
- Build the subflow: add the steps that should run for each item (this subgraph is what gets repeated).
- Decide how execution should behave. Depending on your error and parallel settings, you either stop the whole scenario on the first failure, or keep going and collect per‑item outcomes.
- Mode:
  - Sequential: preserve order and limit load on external systems.
  - Parallel: maximize throughput when each item is independent.
- Parallelism limit (when running in parallel):
  - Use this to cap concurrency to protect APIs and rate limits.
  - 0 typically means “no limit” (use with caution for large lists).
- Continue on failure: continue processing remaining items even if some iterations fail.
- Use item.* references inside the subflow to map values from the current element.
- (Optional) Define input/output schemas and variables:
  - Input schema: describes what each item in the array looks like. The input to Split can come directly from the scenario input data or from previous steps (for example, the output of a “Parse CSV” step that turns file rows into steps.parse_csv.outputs.items).
  - Output schema: describes what you want to see after the Split finishes – for example, a list of successfully processed records, a list of failed records with reasons, or a summary with counts.
  - Variables: use variables when you need per-item working values derived from the input. For example, if Split receives items with item.basePrice and you want to add a margin for each item, define a variable like item.priceWithMargin = item.basePrice * (1 + marginPercent) and use that as the final price in the Split output instead of the raw base price.
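In parallel mode, the parallelism limit maps naturally onto a bounded worker pool. A sketch using Python's ThreadPoolExecutor, assuming (for illustration) that a limit of 0 means one worker per item; run_split_parallel and subflow are hypothetical names:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_split_parallel(items, subflow, parallelism_limit=0,
                       continue_on_failure=True):
    # A limit of 0 means "no cap": allow one worker per item (assumption).
    max_workers = parallelism_limit or max(len(items), 1)
    processed, failed = [], []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Submit every item; the pool never runs more than max_workers at once.
        futures = {pool.submit(subflow, item): item for item in items}
        for future in as_completed(futures):
            item = futures[future]
            try:
                processed.append(future.result())
            except Exception as exc:
                if not continue_on_failure:
                    raise  # fail fast: propagate the first error seen
                failed.append({"item": item, "errorMessage": str(exc)})
    return {"processed": processed, "failed": failed}
```

Note that as_completed yields results in completion order, not input order — which is why sequential mode is the choice when order must be preserved.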
Example: CSV input and Split output
- Input:
  - A CSV file with product rows (parsed earlier in the scenario).
  - A “Parse CSV” step produces steps.parse_csv.outputs.items – an array where each row becomes an item inside the Split.
- Split configuration:
  - Value: steps.parse_csv.outputs.items
  - Input schema for each item: { sku, basePrice, currency, channel }
  - Variable: item.priceWithMargin = item.basePrice * 1.15
- Output:
  - For each item, the subflow:
    - Calculates priceWithMargin.
    - Calls a pricing or channel API to upsert the product price.
  - The Split output (referenced later as steps.split_pricing.outputs) can be modeled as:
    - processed: list of items with sku, priceWithMargin, and an external status.
    - failed: list of items with sku and errorMessage.
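The CSV pricing example above can be sketched end to end; upsert_price is a hypothetical stand-in for the pricing/channel API call, and the field names follow the schema shown:

```python
# End-to-end sketch of the CSV pricing Split (hypothetical API stand-in).
MARGIN = 1.15

def upsert_price(sku, price, channel):
    # Stand-in for the pricing/channel API; a real step would call out here.
    return "updated"

def split_pricing(rows):
    processed, failed = [], []
    for item in rows:  # each parsed CSV row is one `item`
        try:
            # Per-item variable: item.priceWithMargin = item.basePrice * 1.15
            price_with_margin = round(item["basePrice"] * MARGIN, 2)
            status = upsert_price(item["sku"], price_with_margin,
                                  item["channel"])
            processed.append({"sku": item["sku"],
                              "priceWithMargin": price_with_margin,
                              "status": status})
        except (KeyError, TypeError) as exc:
            # Continue on failure: collect the broken row instead of aborting.
            failed.append({"sku": item.get("sku"), "errorMessage": str(exc)})
    return {"processed": processed, "failed": failed}
```

The returned dictionary matches the modeled output: a processed list with sku, priceWithMargin, and status, plus a failed list with sku and errorMessage.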
Typical use cases
- Order processing:
  - Reserve inventory per line item.
  - Create or update fulfillment lines per item.
  - Validate each line item and fail fast on the first invalid entry.
- Catalog sync:
  - Normalize each product, then upsert to Shopify / marketplace.
  - Enrich each SKU with data from an external API.
- Imports and batch jobs:
  - Process CSV rows (validate → transform → post to API).
  - Iterate through paginated API results and process each record.
- Automation at scale:
  - “Best effort” updates where failures should not block the whole run (continue on error, collect results, summarize in Finish).