Series 5: Workflow and Orchestration
Part of the Modern Software Development Without the Hype series
Why orchestration matters
Modern platforms rarely collapse dramatically. They degrade quietly, often when nobody can answer a simple question fast enough: did the work actually finish, and if not, where did it stall?
Logs stream, dashboards blink and events move, yet uncertainty persists. This gap exists because most organisations monitor behaviours rather than supervise outcomes. Orchestration fills that gap.
Orchestration is the explicit coordination of work across systems. It ensures that sequences occur correctly, deviations are handled deliberately and completion is visible rather than inferred. Messaging carries information, automation executes actions and monitoring reports conditions. Orchestration ensures that outcomes actually occur.
Orchestration before the term existed
Long before the word appeared in cloud literature, industries orchestrated by necessity.
Banking settlement files arrived via FTP. Control totals were verified. Files were transformed, distributed and processed. When conditions deviated, recovery procedures executed and operators escalated. Nobody called this orchestration, but supervision existed because the business required certainty that work completed.
Modern complexity simply forced this implicit skill to become explicit.
Why orchestration became visible
As estates grew more diverse, embedding supervision into scripts and institutional knowledge became fragile. Integrations increased, timing dependencies multiplied and compliance scrutiny intensified. Organisations discovered that invisible supervision was risky.
They needed durable state, visible progress, predictable retries, compensating behaviour and auditable history. Explicit orchestration emerged because reliability demanded it, not because vendors invented a new pattern.
The overlooked reality: orchestration spans unlike systems
Most enterprise estates are not uniform. They run legacy UNIX services, Windows workloads, cloud services, SaaS platforms and bespoke line-of-business systems.
No single runtime can command everything. This is where orchestration earns its value. It supervises work across unlike systems rather than forcing standardisation.
An orchestrator may wait for a mainframe batch to finish, trigger a cloud-based behavioural model, poll a SaaS CRM for customer status, then initiate settlement on a UNIX platform, all while maintaining durable state, enforcing timeouts and recording progress coherently.
Modern engines simply make that supervision visible and maintainable.
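To make that concrete, here is a hand-rolled sketch of such supervision in Python. Every path, probe and timeout below is hypothetical; the point is the shape of the thing: durable state, bounded waiting and explicit escalation.

```python
import json
import time
from pathlib import Path

STATE = Path("settlement_run.json")  # stand-in for a durable state store

def completed() -> set:
    return set(json.loads(STATE.read_text())["done"]) if STATE.exists() else set()

def record(name: str) -> None:
    STATE.write_text(json.dumps({"done": sorted(completed() | {name})}))

def supervise(name, check, timeout_s, poll_s=30):
    """Wait for one outcome: skip if already recorded, enforce a timeout, persist completion."""
    if name in completed():
        return  # resumed run: this step already finished
    deadline = time.time() + timeout_s
    while not check():  # check() returns True once the outcome is confirmed
        if time.time() > deadline:
            raise TimeoutError(f"{name} stalled")  # surface the stall instead of waiting forever
        time.sleep(poll_s)
    record(name)  # progress survives restarts and is visible to operators

# Hypothetical probes standing in for the integrations described above.
def mainframe_batch_done(): return Path("/data/batch/DONE").exists()
def model_scored():         return True  # e.g. call the behavioural model's API
def crm_status_known():     return True  # e.g. poll the SaaS CRM
def settlement_started():   return True  # e.g. invoke the UNIX settlement platform

supervise("mainframe_batch", mainframe_batch_done, timeout_s=4 * 3600)
supervise("behavioural_model", model_scored, timeout_s=600)
supervise("crm_status", crm_status_known, timeout_s=1800)
supervise("settlement", settlement_started, timeout_s=900)
```

An engine replaces the state file, the polling loop and the timeout handling with durable primitives; the supervisory intent is unchanged.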
Directive versus reactive orchestration
Orchestration manifests differently depending on where authority resides.
Directive orchestration tells participating systems what to do. A cloud engine invokes downstream services directly, waits for results and proceeds accordingly. Azure Durable Functions or AWS Step Functions reflect this model well because services willingly participate.
Reactive orchestration cannot command anything. It observes authoritative systems and coordinates follow-on work when events occur. A settlement file lands. A ledger posts a batch. A quoting engine produces behavioural signals. The orchestrator supervises outcomes because upstream systems cannot be driven externally.
Most real enterprises lean towards reactive patterns because critical systems are not designed to be instructed by others.
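The contrast fits in a few lines of Python. All the clients and checks below are hypothetical stand-ins; what matters is where authority sits.

```python
from dataclasses import dataclass

# Directive: the orchestrator commands willing participants directly.

@dataclass
class Score:
    needs_adviser: bool

class ScoringService:  # hypothetical downstream service client
    def score(self, quote_id: str) -> Score:
        return Score(needs_adviser=True)

class AdviserService:  # hypothetical downstream service client
    def assign(self, quote_id: str) -> None:
        print(f"adviser assigned to {quote_id}")

def directive_flow(quote_id: str) -> None:
    # The orchestrator drives each step and waits on each result.
    if ScoringService().score(quote_id).needs_adviser:
        AdviserService().assign(quote_id)

# Reactive: the orchestrator cannot command the upstream system; it
# observes an authoritative event and coordinates the follow-on work.

def control_totals_match(path: str) -> bool:
    return True  # stand-in for a real reconciliation check

def on_settlement_file_landed(path: str) -> None:
    if not control_totals_match(path):
        print(f"escalating: control totals mismatch for {path}")
        return
    print(f"starting downstream processing for {path}")
```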
A clearer practical lens: insurance quoting
Quoting flows expose the need for orchestration clearly. A customer requests a quote. They may or may not be identifiable. Behavioural models estimate conversion likelihood, then either route the customer to an adviser immediately or let them continue online.
A quote may proceed to purchase, stall indefinitely, represent a bot or reflect comparison shopping where no decision occurs today. Traditional implementations evaluate this behaviour overnight through batch processing.
Orchestration changes this dynamic. Instead of waiting until tomorrow, the orchestrator supervises behaviour in real time. It interprets abandonment signals, identifies intent, communicates outcomes and initiates actions.
The supervision outcome is not “a quote exists.” It is recognising that intent requires intervention and ensuring that something, or someone, is responsible for progressing it.
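A minimal sketch of that real-time supervision, with the timing window and intent model purely illustrative:

```python
import threading

class QuoteSupervisor:
    """Supervises one quote in real time rather than in an overnight batch."""

    def __init__(self, quote_id: str, abandon_after_s: float = 900):
        self.quote_id = quote_id
        # If no purchase arrives within the window, treat the quote as stalled.
        self._timer = threading.Timer(abandon_after_s, self._on_stalled)
        self._timer.start()

    def on_purchase(self) -> None:
        self._timer.cancel()
        print(f"{self.quote_id}: completed, no intervention needed")

    def _on_stalled(self) -> None:
        intent = self.estimate_intent()  # hypothetical behavioural model call
        if intent == "bot":
            print(f"{self.quote_id}: discard, non-human traffic")
        elif intent == "likely_buyer":
            print(f"{self.quote_id}: route to an adviser for follow-up")
        else:
            print(f"{self.quote_id}: record as comparison shopping")

    def estimate_intent(self) -> str:
        return "likely_buyer"  # stand-in for the conversion-likelihood model
```

The overnight batch disappears: the decision about a stalled quote is made while the customer's intent is still actionable.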
Why monitoring is not orchestration
Many teams mistake observability for orchestration. Monitoring surfaces the fact that authentication latency has spiked from milliseconds to eight seconds. It alerts and plots trends.
Orchestration decides what happens next — retry, fail fast, route to standby or escalate. Monitoring sees; orchestration governs.
The absence of orchestration explains why issues get recognised only after effects ripple through systems rather than when behaviour first deviates.
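The distinction fits in a small policy. Monitoring produces the inputs; orchestration owns the decision. The thresholds and attempt limits below are illustrative only.

```python
from enum import Enum, auto

class Action(Enum):
    RETRY = auto()
    FAIL_FAST = auto()
    ROUTE_TO_STANDBY = auto()
    ESCALATE = auto()

def decide(latency_ms: float, attempts: int, standby_healthy: bool) -> Action:
    """Monitoring reports the spike; this policy governs what happens next."""
    if latency_ms < 500 and attempts < 3:
        return Action.RETRY             # brief blip: a bounded retry is cheap
    if standby_healthy:
        return Action.ROUTE_TO_STANDBY  # sustained spike, but a fallback exists
    if attempts >= 3:
        return Action.ESCALATE          # automation exhausted: a person must decide
    return Action.FAIL_FAST             # protect callers rather than queue work behind a stall
```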
How modern engines support orchestration
Orchestration platforms formalise supervision. They provide durable workflow state, timeout logic, compensation behaviour, escalation paths and progress visibility. They make supervision maintainable rather than accidental.
They do not invent orchestration; they reduce the cost of doing it properly.
How orchestration engines differ
Although they solve the same category of problem, engines adopt different philosophies.
Temporal.io — orchestration expressed as durable code
Temporal treats workflows as ordinary code that executes durably. State is persisted and resumed, and failed steps are compensated correctly. This suits heterogeneous estates and long-running processes where failure handling is nuanced. It requires thinking durably rather than procedurally, but repays that effort with flexibility and reliability.
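A minimal sketch of that style using Temporal's Python SDK. The activities and their bodies are hypothetical, and running it also requires a Temporal server plus worker registration, omitted here:

```python
import asyncio
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.common import RetryPolicy

# Hypothetical activities, each wrapping one system in the estate.
@activity.defn
async def verify_control_totals(file_id: str) -> bool:
    return True  # stand-in: reconcile the file against expected totals

@activity.defn
async def post_to_ledger(file_id: str) -> None:
    pass  # stand-in: post the batch to the ledger

@activity.defn
async def reverse_ledger_posting(file_id: str) -> None:
    pass  # stand-in: compensation if a later step fails

@activity.defn
async def initiate_settlement(file_id: str) -> None:
    pass  # stand-in: kick off settlement downstream

@workflow.defn
class SettlementWorkflow:
    @workflow.run
    async def run(self, file_id: str) -> str:
        opts = {
            "start_to_close_timeout": timedelta(minutes=10),
            "retry_policy": RetryPolicy(maximum_attempts=3),
        }
        if not await workflow.execute_activity(verify_control_totals, file_id, **opts):
            return "rejected"  # a deliberate deviation path, not a silent stall
        await workflow.execute_activity(post_to_ledger, file_id, **opts)
        try:
            # Inside a Temporal workflow this sleep is a durable timer:
            # it survives worker restarts and deployments.
            await asyncio.sleep(3600)
            await workflow.execute_activity(initiate_settlement, file_id, **opts)
        except Exception:
            # Compensate rather than leave the ledger half-updated.
            await workflow.execute_activity(reverse_ledger_posting, file_id, **opts)
            raise
        return "settled"
```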
Camunda — orchestration as business modelling
Camunda expresses workflows through BPMN diagrams readable by analysts, developers and compliance functions. Execution semantics align with model semantics. This fits regulated sectors and organisations needing visible process interpretation. Its cost is modelling discipline and heavier runtime footprint; its value is shared meaning between business and technology.
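On the execution side, a common pattern pairs the BPMN model with external task workers. A minimal sketch against Camunda 7's REST API, with the endpoint, topic name and payloads illustrative (Camunda 8 uses a different, Zeebe-based client):

```python
import time

import requests

ENGINE = "http://localhost:8080/engine-rest"  # assumed Camunda 7 REST endpoint
WORKER = "settlement-worker"

def work() -> None:
    """Fetch-and-lock loop for a BPMN service task published as an external task."""
    while True:
        tasks = requests.post(
            f"{ENGINE}/external-task/fetchAndLock",
            json={
                "workerId": WORKER,
                "maxTasks": 1,
                # topicName must match the topic declared on the BPMN service task
                "topics": [{"topicName": "verify-control-totals", "lockDuration": 60000}],
            },
            timeout=30,
        ).json()
        for task in tasks:
            # ... perform the actual verification work here ...
            requests.post(
                f"{ENGINE}/external-task/{task['id']}/complete",
                json={"workerId": WORKER},
                timeout=30,
            )
        time.sleep(5)  # simple polling; long polling is also supported

if __name__ == "__main__":
    work()
```

The diagram stays the shared artefact; the worker is deliberately ignorant of process state, which the engine owns.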
Azure Durable Functions — orchestration embedded in the cloud fabric
Durable Functions coordinate cloud behaviours with durable execution managed by Azure. This fits directive orchestration where workloads already live in Azure and state is delegated to the provider. Its appeal is operational ease; its limitation is dependency on Azure semantics and reduced portability.
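A minimal sketch in the Python programming model, applied to the quoting flow above (activity names and the timeout window are illustrative):

```python
from datetime import timedelta

import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    quote_id = context.get_input()

    # Azure checkpoints state at each yield; replays restore progress after failures.
    score = yield context.call_activity("ScoreQuote", quote_id)  # hypothetical activity

    if score["needs_adviser"]:
        yield context.call_activity("AssignAdviser", quote_id)  # hypothetical activity
        return "routed"

    # Wait for a purchase event, but only for so long.
    deadline = context.current_utc_datetime + timedelta(minutes=15)
    timer = context.create_timer(deadline)
    purchase = context.wait_for_external_event("QuotePurchased")
    winner = yield context.task_any([timer, purchase])

    if winner == purchase:
        timer.cancel()
        return "purchased"
    return "abandoned"

main = df.Orchestrator.create(orchestrator_function)
```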
AWS Step Functions — orchestration as service composition
Step Functions coordinate AWS services using explicit state-machine logic. This suits cloud-native teams assembling service behaviour inside AWS. It is less effective for complex recovery logic or workflows anchored in external enterprise systems. Its value is assembly speed; its constraint is expressiveness and vendor lock-in.
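A minimal sketch of the same quoting decision in Amazon States Language, deployed with boto3 (all ARNs are placeholders):

```python
import json

import boto3

# Retries, timeouts and branching are declared in the state machine, not coded.
definition = {
    "StartAt": "ScoreQuote",
    "States": {
        "ScoreQuote": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:111122223333:function:score-quote",
            "TimeoutSeconds": 30,
            "Retry": [{"ErrorEquals": ["States.Timeout"], "MaxAttempts": 2}],
            "Next": "NeedsAdviser",
        },
        "NeedsAdviser": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.needs_adviser", "BooleanEquals": True, "Next": "AssignAdviser"}
            ],
            "Default": "Done",
        },
        "AssignAdviser": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:111122223333:function:assign-adviser",
            "End": True,
        },
        "Done": {"Type": "Succeed"},
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="QuoteSupervision",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsRole",  # placeholder role
)
```

The definition is readable as composition, which is both the strength and the constraint: recovery logic beyond declared retries has to live in the services themselves.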
Thinking clearly about these engines
The engines differ less in capability than in worldview.
Temporal assumes workflows are logic.
Camunda assumes workflows are business processes.
Azure and AWS assume workflows are compositions of provider services.
Choosing one is a decision about who needs to understand the workflow and where supervision should reside.
Cost, trade-offs and dependency
The economics are real, even if firms rarely quantify them explicitly.
Hand-built supervision appears inexpensive until knowledge evaporates or recovery logic becomes operational debt.
Temporal introduces infrastructure runtime cost and a learning curve. For a mid-sized enterprise, operating Temporal tends to fall in the low-to-mid six-figure annual range once resilience, operations and skill development are considered. It reduces bespoke recovery cost significantly.
Camunda introduces modelling effort. Organisations new to formal process modelling often discover that modelling and alignment extend delivery timelines by weeks or months rather than days — worthwhile when compliance and shared understanding matter.
Cloud orchestrators offer low initial friction but accumulate dependency. Once dozens of workflows exist, migration becomes expensive because orchestration logic is inseparable from provider semantics.
The real comparison is not licence fees; it is the long-term cost of predictable supervision versus the hidden cost of hoping things self-resolve.
When each approach fits
Temporal.io suits heterogeneous estates, mission-critical workflows and long-running logic requiring compensation.
Camunda fits regulated sectors and business-driven environments where visibility and auditability matter.
Azure Durable Functions and Step Functions fit cloud-centric organisations orchestrating service behaviour rather than deep business flows.
Hand-built orchestration remains appropriate when legacy platforms cannot participate or where logic is inseparably bound to existing system behaviour.
There is no universal winner; only fitness for context.
The essential insight
Orchestration has always existed. Historically it lived in scripts, job controllers and operational knowledge. Modern platforms expose it, stabilise it and reduce the cost of maintaining it.
Mature architectures treat orchestration as a first-class capability that binds unlike systems into reliable behaviour. Immature ones assume monitoring will compensate for the absence of supervision.
What comes next
The next article examines observability. Orchestration ensures outcomes; observability reveals reality. Together they form the backbone of reliability.


