iWF is a platform providing all-in-one tooling for building long-running business applications. It provides an abstraction for persistence (database, ElasticSearch) and more! It aims to provide a clean, simple, and easy-to-use interface, like an iPhone.
It will not make you a 10x developer...but you may feel like one!
We call a long-running process a Workflow.
It's a simple and powerful WorkflowAsCode general-purpose workflow engine.
The server is backed by Cadence/Temporal as an interpreter.
Related projects:
- API definition between SDKs and server.
- iWF Java SDK
- iWF Java Samples
- iWF Golang SDK
- iWF Golang Samples
- More SDKs? Contributions are welcome. Any language can be supported as long as it implements the IDL.
- Community & Help
- What is iWF
- Why iWF
- How to run this server
- How to migrate from Cadence/Temporal
- Monitoring and Operations
- Development Plan
- Some history
- Contribution
An iWF application hosts a set of iWF workflow workers. Using the iWF SDKs, the workers host two REST APIs for WorkflowState: start and decide.
The application calls the iWF server to interact with workflow executions -- start, stop, signal, get results, etc. -- using the iWF SDKs.
The iWF server hosts those APIs (also REST) as the iWF API service. The API service calls the Cadence/Temporal service as the backend.
The iWF server also hosts Cadence/Temporal workers, which host an interpreter workflow. All iWF workflows are interpreted into this Cadence/Temporal workflow. The interpreter workflow invokes the two iWF APIs of the application workflow workers. Internally, the two APIs are executed as Cadence/Temporal activities. Therefore, all REST API requests/responses with the worker are stored in history events, which is useful for debugging/troubleshooting.
iWF lets you build long-running applications by implementing the workflow interface, e.g.
Java Workflow interface
or Golang Workflow interface.
An instance of the interface is a WorkflowDefinition. User applications use IwfWorkflowType to differentiate WorkflowDefinitions.
A WorkflowDefinition contains several WorkflowStates, e.g.
Java WorkflowState interface
or Golang WorkflowState interface.
A WorkflowState is implemented with two APIs: start and decide.
The start API is invoked immediately when a WorkflowState is started. It returns some Commands to the server. When the requested Commands are completed, the decide API is triggered. The decide API decides the next states to execute. There can be multiple next states, and states can be re-executed as different StateExecutions.
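To make the start/decide pairing concrete, here is a minimal sketch in Go. The `Command`, `StateDecision`, and `WorkflowState` types below are simplified illustrations, not the real iWF Golang SDK definitions, which are richer.

```go
package main

import "fmt"

// Hypothetical, simplified types for illustration only; the real iWF
// Golang SDK defines richer versions of these.
type Command struct{ Type string }
type StateDecision struct{ NextStates []string }

// A WorkflowState pairs a start API (returning commands to wait on)
// with a decide API (choosing the next states once commands complete).
type WorkflowState interface {
	Start() []Command
	Decide() StateDecision
}

// Example state: wait for a durable timer, then move to an "approve" state.
type waitState struct{}

func (waitState) Start() []Command {
	return []Command{{Type: "TimerCommand"}}
}

func (waitState) Decide() StateDecision {
	return StateDecision{NextStates: []string{"approve"}}
}

func main() {
	var s WorkflowState = waitState{}
	fmt.Println(s.Start()[0].Type, s.Decide().NextStates)
}
```

The key idea is that all waiting happens between start and decide, managed durably by the server; the worker code itself stays stateless.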
An application can start a workflow instance with a workflowId for any workflow definition. A workflow instance is called a WorkflowExecution.
The iWF server returns a runId, a UUID, as the identifier of the WorkflowExecution. The runId is globally unique.
WorkflowId uniqueness: at any time, there can be at most one running WorkflowExecution with the same workflowId. However, after a previous WorkflowExecution finishes (in any closed status),
the application may start a new WorkflowExecution with the same workflowId using an appropriate IdReusePolicy.
There must be at least one WorkflowState being executed for a running WorkflowExecution. An instance of a WorkflowState is called a StateExecution.
Depending on the context, the word workflow may mean WorkflowExecution (most commonly), WorkflowDefinition, or both.
These are the three command types:
- SignalCommand: waits for a signal to be published to the workflow signal channel. An external application can use the SignalWorkflow API to signal a workflow.
- TimerCommand: waits for a durable timer to fire.
- InterStateChannelCommand: waits for a value to be published from another state in the same workflow execution.

Note that the start API can return multiple commands, and can choose a DeciderTriggerType for triggering the decide API:
- AllCommandCompleted: waits for all commands to complete.
- AnyCommandCompleted: waits for any command to complete.
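The trigger semantics can be sketched in a few lines of Go. The constant names below mirror the concepts described above but are illustrative, not the exact SDK constants:

```go
package main

import "fmt"

// Illustrative-only names mirroring the three command types and the
// two decider trigger types; the real SDK constants differ in detail.
type CommandType string
type DeciderTriggerType string

const (
	SignalCommand            CommandType        = "SignalCommand"
	TimerCommand             CommandType        = "TimerCommand"
	InterStateChannelCommand CommandType        = "InterStateChannelCommand"
	AllCommandCompleted      DeciderTriggerType = "AllCommandCompleted"
	AnyCommandCompleted      DeciderTriggerType = "AnyCommandCompleted"
)

// shouldDecide reports whether the decide API should fire, given which
// of the requested commands have completed and the chosen trigger type.
func shouldDecide(done []bool, trigger DeciderTriggerType) bool {
	completed := 0
	for _, d := range done {
		if d {
			completed++
		}
	}
	if trigger == AnyCommandCompleted {
		return completed >= 1
	}
	return completed == len(done) // AllCommandCompleted
}

func main() {
	// A signal raced against a timeout timer: decide fires on the first one.
	fmt.Println(shouldDecide([]bool{true, false}, AnyCommandCompleted))
}
```

A common pattern is a signal command plus a timer command with AnyCommandCompleted, giving "wait for input, but time out after N days" in one state.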
iWF provides a super simple persistence abstraction for workflows to use. Developers don't need to touch any database system to register/maintain schemas. The only schema is defined in the workflow code.
DataObject is for:
- sharing some data values across the workflow
- can be retrieved by an external application using the GetDataObjects API
- can be viewed in the Cadence/Temporal WebUI in the QueryHandler tab
SearchAttribute is similar:
- sharing some data values across the workflow
- can be retrieved by an external application using the GetSearchAttributes API
- can be used by an external application to search for workflows using the SearchWorkflow API
- can be used to search for workflows in the Cadence/Temporal WebUI in the Advanced tab
- a search attribute type must be registered in the Cadence/Temporal server before being used for searching, because it is backed by ElasticSearch
- the supported data types are limited, as the server has to understand the values for indexing
- see the Temporal doc and Cadence doc to understand more about SearchAttributes
StateLocal is for:
- passing some data values from the start API to the decide API in the same WorkflowState execution

RecordEvent is for:
- recording some events within a state execution. They are useful for debugging using workflow history. You may want to record the input/output of dependency RPC calls, for example.
Logically, each workflow type will have a persistence schema like below:
+-------------+-------+-----------------+-----------------+----------------------+----------------------+-----+
| workflowId | runId | dataObject key1 | dataObject key2 | searchAttribute key1 | searchAttribute key2 | ... |
+-------------+-------+-----------------+-----------------+----------------------+----------------------+-----+
| your-wf-id1 | uuid1 | value1          | value2          | keyword-value1       | 123(integer)         | ... |
+-------------+-------+-----------------+-----------------+----------------------+----------------------+-----+
| your-wf-id1 | uuid2 | value3 | value4 | keyword-value2 | 456(integer) | ... |
+-------------+-------+-----------------+-----------------+----------------------+----------------------+-----+
| your-wf-id2 | uuid3 | value5 | value5 | keyword-value3 | 789(integer) | ... |
+-------------+-------+-----------------+-----------------+----------------------+----------------------+-----+
| ... | ... | ... | ... | ... | ... | ... |
+-------------+-------+-----------------+-----------------+----------------------+----------------------+-----+
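One way to picture a row of this logical schema in code is as a plain struct: per (workflowId, runId), dataObjects hold arbitrary values, while searchAttributes hold only server-indexable types. This is a hypothetical in-memory model for illustration, not how the iWF server actually stores data:

```go
package main

import "fmt"

// Hypothetical model of one row of the logical persistence schema:
// dataObjects may hold any value; searchAttributes are limited to
// types the server can index (keyword, integer, etc.).
type workflowRow struct {
	workflowId       string
	runId            string
	dataObjects      map[string]any
	searchAttributes map[string]any
}

func main() {
	row := workflowRow{
		workflowId:  "your-wf-id1",
		runId:       "uuid1",
		dataObjects: map[string]any{"key1": "value1", "key2": "value2"},
		searchAttributes: map[string]any{
			"key1": "keyword-value1",
			"key2": 123, // integer search attribute
		},
	}
	fmt.Println(row.workflowId, row.dataObjects["key1"], row.searchAttributes["key2"])
}
```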
There are two major communication mechanisms in iWF:
- SignalChannel: for receiving input from external sources asynchronously. It's used with SignalCommand.
- InterStateChannel: for interaction between state executions. It's used with InterStateChannelCommand.
Client APIs are hosted by iWF server for user workflow application to interact with their workflow executions.
- Start workflow: start a new workflow execution
- Stop workflow: stop a workflow execution
- Signal workflow: send a signal to a workflow execution
- Search workflow: search for workflows using a query language like SQL with search attributes
- Get workflow: get basic information about a workflow, like status and results (if completed, or wait for completion)
- Get workflow data objects: get the dataObjects of a workflow execution
- Get workflow search attributes: get the search attributes of a workflow execution
- Reset workflow: reset a workflow to previous states
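Since these client APIs are plain REST, a client can be sketched with the standard library. The endpoint path and request fields below are simplified assumptions for illustration; see the iWF API definition project for the exact contract:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildStartRequest sketches how a client might construct a call to the
// iWF server's start-workflow REST API. The path and field names are
// assumptions, not the exact iWF API definition.
func buildStartRequest(serverURL, workflowType, workflowId string) (*http.Request, error) {
	body, err := json.Marshal(map[string]string{
		"workflowType": workflowType,
		"workflowId":   workflowId,
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, serverURL+"/api/v1/workflow/start", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildStartRequest("http://localhost:8801", "OrderWorkflow", "order-123")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
}
```

In practice you would use the iWF Java or Golang SDK client rather than hand-rolling requests; this only shows that the surface is ordinary REST.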
- See Slide deck for what problems it is solving
- See Design doc for how it works
- Check out this doc to understand some history
iWF is an application platform that provides comprehensive tooling:
- WorkflowAsCode for highly flexible/customizable business logic
- Parallel execution of multiple threads of business logic
- Persistent storage for intermediate states, stored as "dataObjects"
- Persistent searchable attributes that can be used for flexible searching, even full-text searching, backed by ElasticSearch
- Receiving data from external systems by Signal
- Durable timer, and cron job scheduling
- Reset workflow to let you recover the workflows from bad states easily
- Highly testable and easy to maintain
- ...
Check out this repo, go to the docker-compose folder and run it:

cd docker-compose && docker-compose up

This by default will run a Temporal server with it, and will also register a default namespace and the search attributes required by iWF.
Link to WebUI: http://localhost:8233/namespaces/default/workflows
By default, the iWF server serves port 8801; the server URL is http://localhost:8801/
NOTE:
Use docker pull iworkflowio/iwf-server:latest to update to the latest image, or update the docker-compose file to specify a version tag.
- Run make bins to build the binary iwf-server.
- Then run ./iwf-server start to run the service. This defaults to serving workflow APIs with the Temporal interpreter implementation. It requires a local Temporal setup. See Run with local Temporal.
- Alternatively, run ./iwf-server --config config/development_cadence.yaml start to run with local Cadence. See below instructions for setting up local Cadence.
- Run make integTests to run all integration tests. By default this requires both local Cadence and Temporal to be set up.
You can customize the docker image, or just use the API and interpreter that are exposed as the API service and workflow service.
For more info, contact [email protected]
Migrating from Cadence/Temporal is simple and easy, but it's only possible to migrate new workflow executions. Let your applications only start new workflows in iWF. For existing running workflows in Cadence/Temporal, keep the Cadence/Temporal workers until they are finished.
Wait, what? There is no activity at all in iWF?
Yes. iWF workflows are essentially a REST service, and all the activity code in Cadence/Temporal can just move into iWF workflow code -- the start or decide API of a WorkflowState.
A main reason that many people use Cadence/Temporal activity is to take advantage of the history showing input/output in WebUI. This is handy for debugging/troubleshooting.
iWF provides the RecordEvent API to mimic this. It can be called with any arbitrary data, which will be recorded into history, just for debugging/troubleshooting.
Depending on the Cadence/Temporal SDK, there are different APIs like SignalMethod/SignalChannel/SignalHandler etc. In iWF, just use SignalCommand as the equivalent.
In some use cases, you may have multiple signal commands and use the AnyCommandCompleted decider trigger type to wait for any command to complete.
There are different timer APIs in Cadence/Temporal, depending on the SDK:
- workflow.Sleep(duration)
- workflow.Await(duration, condition)
- workflow.NewTimer(duration)
- ...
In iWF, just use TimerCommand as the equivalent.
Again, in some use cases, you may combine signal/timer commands and use the AnyCommandCompleted decider trigger type to wait for any command to complete.
Depending on the Cadence/Temporal SDK, there are different APIs like QueryHandler/QueryMethod/etc.
In iWF, use DataObjects as the equivalent. Unlike in Cadence/Temporal, DataObjects must be explicitly defined in the WorkflowDefinition.
Note that by default, all DataObjects and SearchAttributes are loaded into every WorkflowState with the LOAD_ALL_WITHOUT_LOCKING persistence loading policy.
This could be a performance issue if there are too many big items. Consider using a different loading policy, like LOAD_PARTIAL_WITHOUT_LOCKING, by customizing the WorkflowStateOptions.
Also note that DataObjects are not just for returning data to the API, but also for sharing data across different StateExecutions. If it's just to share data from the start API to the decide API in the same StateExecution, using StateLocal is preferred for efficiency reasons.
iWF has the same concept of Search Attributes. Unlike in Cadence/Temporal, SearchAttributes must be explicitly defined in the WorkflowDefinition.
There is no versioning API at all in iWF!
Because there is no replay at all for iWF workflow applications, there are no non-deterministic errors and no versioning API. All workflow state executions are stored in the Cadence/Temporal activities of the interpreter workflow.
Workflow code changes always apply to all workflow executions, existing running and new, once deployed. This gives superpower and flexibility for maintaining long-running business applications.
However, making workflow code changes still has backward-compatibility issues, like in any other microservice application. Below are the standard ways to address them:
- It's rare, but if you don't want old workflows to execute the new code, use a flag in new executions to branch out. For example, when changing the flow StateA->StateB to StateA->StateC only for new workflows, set a new flag in the new workflows so that StateA can decide whether to go to StateB or StateC.
- Removing state code could cause errors (state not found) if any state execution is still running that state. For example, after changing StateA->StateB to StateA->StateC, you may want to delete StateB. If a StateExecution stays at StateB (most commonly waiting for commands to complete), deleting StateB will cause a not-found error when StateB is executed.
  - The error will be gone if you add StateB back, because by default all state APIs are backoff-retried forever.
  - If you want to delete StateB as early as possible, use the IwfWorkflowType and IwfExecutingStateIds search attributes to confirm whether any workflows are still running at that state. These are built-in search attributes from the iWF server.
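The flag-based branching described in the first point can be sketched as a tiny decide-time routing function. The state names and the `useNewFlow` flag are illustrative:

```go
package main

import "fmt"

// nextStateAfterA sketches flag-based branching for a workflow code
// change: new executions carry a flag in their input, so StateA routes
// them to StateC while old executions continue through StateB.
// Names (StateA/B/C, useNewFlow) are illustrative assumptions.
func nextStateAfterA(useNewFlow bool) string {
	if useNewFlow {
		return "StateC" // new flow, only for workflows started after the change
	}
	return "StateB" // old flow, kept until all old executions finish
}

func main() {
	fmt.Println(nextStateAfterA(false), nextStateAfterA(true))
}
```

Once the IwfExecutingStateIds search attribute confirms that no execution is still at StateB, both the flag and StateB's code can be removed.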
In Cadence/Temporal, multi-threading is powerful for complicated applications. But the APIs are hard to understand, use, and debug, especially since each language/SDK has its own set of APIs without much consistency.
In iWF, there are just a few concepts that are very straightforward:
- The decide API can go to multiple next states. All next states will be executed in parallel.
- The decide API can also go back to any previous StateId, or the same StateId, to form a loop. The StateExecutionId is the unique identifier.
- Use InterStateChannel for synchronization communication. It's just like a signal channel that works internally.
Some notes:
- Any state can decide to complete or fail the workflow, or just go to a dead end(no next state).
- Because of the above, there could be zero, or more than one, state completing with data as workflow results.
- To get multiple state results from a workflow execution, use the special client API getComplexWorkflowResult.
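The parallel-state model above can be sketched in a few lines. The `decide` return value and the StateExecutionId scheme here are illustrative assumptions, not the SDK's exact format:

```go
package main

import "fmt"

// decide sketches a decide API fanning out to two next states; the
// server would execute both in parallel as separate StateExecutions.
// State names are illustrative.
func decide() []string {
	return []string{"NotifyState", "ArchiveState"}
}

// stateExecutionId sketches a hypothetical unique-id scheme combining
// the StateId with an execution number, so the same StateId can be
// re-executed (e.g. in a loop) under distinct StateExecutionIds.
func stateExecutionId(stateId string, executionNumber int) string {
	return fmt.Sprintf("%s-%d", stateId, executionNumber)
}

func main() {
	for _, s := range decide() {
		fmt.Println(stateExecutionId(s, 1))
	}
}
```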
Check Client APIs for all the APIs that are equivalent to Cadence/Temporal client APIs.
Features like IdReusePolicy, CronSchedule, RetryPolicy are also supported in iWF.
What's more, iWF provides features that are impossible in Cadence/Temporal, like resetting a workflow by StateId or StateExecutionId. Because WorkflowStates are explicitly defined, the reset API is a lot friendlier to use.
Is that all? For now yes. We believe these are all you need to migrate to iWF from Cadence/Temporal.
The main philosophy of iWF is to provide simple, easy-to-understand APIs to users (minimalist), as opposed to the complicated and huge number of APIs in Cadence/Temporal.
So what about something else like:
- Timeout and backoff retry: State start/decide APIs have default timeout and infinite backoff retry. You can customize in StateOptions.
- ChildWorkflow can be replaced with regular workflow + signal. See this StackOverflow for why.
- SignalWithStart: Using the start + signal APIs is the same, except for more exception handling work. We have seen a lot of people who don't know how to use it correctly in Cadence/Temporal. We will consider providing it in a better way in the future.
- ContinueAsNew: this is missing in iWF for now. But following the philosophy of hiding internal details, we will implement it in a way that is transparent to user workflows. Internally, the interpreter workflow can continueAsNew without the iWF user workflow knowing.
- Long-running activity with stateful recovery (heartbeat details): this is indeed a good one that we want to add. But we don't yet see this use of Cadence/Temporal activities being very common. Please leave a message if you need it.
If you believe there is something else you really need, open a ticket or join us in the discussion.
There are two components for iWF server: API service and interpreter worker service.
For API service, set up monitors/dashboards:
- API availability
- API latency
The interpreter worker service is just a standard Cadence/Temporal workflow application. Follow the developer guides:
- For Cadence to set up monitor/dashboards
- For Temporal to set up monitor/dashboards and metrics definition
As you may realize, an iWF application is a typical REST microservice. You just need the standard ways to operate it.
Usually, you need to set up monitors/dashboards:
- API availability
- API latency
When something goes wrong in your applications, here are the tips:
- Let your worker service return the error stacktrace as the response body to the iWF server, e.g. like this example of Spring Boot using ExceptionHandler.
- If you return the full stacktrace in the response body, the pending activity view will show it to you! Then use the Cadence/Temporal WebUI to debug your application.
- All the input/output to your workflow is stored in the activity input/output of history events. The input is in ActivityTaskScheduledEvent; the output is in ActivityTaskCompletedEvent, or in the pending activity view if there are errors.
- Start workflow API
- Executing start/decide APIs and completing workflows
- Parallel execution of multiple states
- Timer command
- Signal command
- SearchAttributeRW
- DataObjectRW
- StateLocal
- Signal workflow API
- Get DataObjects/SearchAttributes API
- Get workflow info API
- Search workflow API
- Stop workflow API
- Reset workflow API
- Command type(s) for inter-state communications (e.g. internal channel)
- AnyCommandCompleted Decider trigger type
- More workflow start options: IdReusePolicy, cron schedule, retry
- StateOption: Start/Decide API timeout and retry policy
- Reset workflow by stateId or stateExecutionId
- StateOption.PersistenceLoadingPolicy: LOAD_PARTIAL_WITHOUT_LOCKING
- More Search attribute types: Datetime, double, bool, keyword array, text
- More workflow start options: initial search attributes
- Auto ContinueAsNew
- WaitForMoreResults in StateDecision
- Skip timer API for testing/operation
- LongRunningActivityCommand
- Decider trigger type: AnyCommandClosed
- Failing workflow details
- StateOption.PersistenceLoadingPolicy: LOAD_ALL_WITH_EXCLUSIVE_LOCK and LOAD_PARTIAL_WITH_EXCLUSIVE_LOCK
AWS published SWF in 2012 and then moved to Step Functions in 2016 because they found SWF too hard to support. Cadence & Temporal continued the idea of SWF and became much more powerful. However, AWS was right that the programming model of SWF/Cadence/Temporal is hard to adopt because it leaks too many internals. Inspired by Step Functions, iWF was created to provide the equivalent power of Cadence/Temporal while hiding all the internal details and providing a clean and simple API.
Read this doc for more.

