
workflow

package module
v0.3.9
Published: Dec 12, 2025 License: BSD-3-Clause Imports: 27 Imported by: 19

README


Workflow

The type-safe, event-driven workflow orchestration library that scales with your business.

Build robust, distributed workflows in Go with compile-time safety, automatic retries, and horizontal scaling out of the box.

// Define your business logic as type-safe state machines
b := workflow.NewBuilder[Order, OrderStatus]("order-processing")
b.AddStep(OrderCreated, ProcessPayment, PaymentProcessed)
b.AddStep(PaymentProcessed, FulfillOrder, OrderCompleted)

wf := b.Build(kafkaStreamer, sqlStore, roleScheduler)

Why Choose Workflow?

🎯 Type-Safe by Design

Unlike other orchestrators, Workflow leverages Go generics for compile-time guarantees. Catch errors before deployment, not in production.

// Your IDE knows exactly what data flows where
func processPayment(ctx context.Context, r *workflow.Run[Order, OrderStatus]) (OrderStatus, error) {
    // r.Object is typed as *Order, OrderStatus is your enum
    // Compiler catches mismatches before they cause runtime errors
    return PaymentProcessed, nil
}
⚡ Event-Driven Architecture

Built for modern distributed systems. Steps communicate through durable events, enabling:

  • Loose coupling between workflow components
  • Automatic retries with exponential backoff
  • Horizontal scaling across multiple instances
  • Fault tolerance that survives network partitions
🔧 Infrastructure Agnostic

Your choice of database, message queue, and coordination service. Start simple, scale when needed:

// Development: Everything in-memory
wf := b.Build(memstreamer.New(), memrecordstore.New(), memrolescheduler.New())

// Production: Battle-tested infrastructure
wf := b.Build(kafkastreamer.New(), sqlstore.New(), rinkrolescheduler.New())
📊 Built-in Observability

Production-ready monitoring without the setup overhead:

  • Prometheus metrics for throughput, latency, and error rates
  • Web UI for real-time workflow visualization
  • Structured logging with correlation IDs
  • Distributed tracing support

Perfect For

  • Order Processing: Payment, inventory, fulfillment pipelines
  • User Onboarding: Multi-step verification and activation flows
  • Financial Operations: Transaction processing with compliance checks
  • Data Processing: ETL pipelines with validation and cleanup
  • Approval Workflows: Multi-stakeholder review processes

vs. The Alternatives

| Feature | Workflow | Temporal | Zeebe/Camunda |
| --- | --- | --- | --- |
| Type Safety | ✅ Compile-time (Go generics) | ❌ Runtime validation | ❌ Runtime (BPMN) |
| Architecture | ✅ Event-driven state machines | ⚠️ RPC-based activities | ⚠️ Token-based execution |
| Infrastructure | ✅ Your choice (adapters) | ❌ Requires Temporal cluster | ❌ Requires external engine |
| Deployment | ✅ Library in your app | ❌ Separate server/workers | ❌ Separate engine |
| Learning Curve | ✅ Native Go patterns | ⚠️ New concepts & SDKs | ❌ BPMN modeling |
| Language | ✅ Go-native | ⚠️ Multi-language via gRPC | ⚠️ Multi-language |

Quick Start

go get github.com/luno/workflow

package main

import (
    "context"
    "fmt"
    "github.com/luno/workflow"
    "github.com/luno/workflow/adapters/memstreamer"
    "github.com/luno/workflow/adapters/memrecordstore"
    "github.com/luno/workflow/adapters/memrolescheduler"
)

type TaskStatus int
const (
    TaskStatusUnknown   TaskStatus = 0
    TaskStatusCreated   TaskStatus = 1
    TaskStatusProcessed TaskStatus = 2
    TaskStatusCompleted TaskStatus = 3
)

// String satisfies workflow.StatusType (typically generated with stringer).
func (s TaskStatus) String() string {
    switch s {
    case TaskStatusCreated:
        return "Created"
    case TaskStatusProcessed:
        return "Processed"
    case TaskStatusCompleted:
        return "Completed"
    default:
        return "Unknown"
    }
}

type Task struct {
    ID   string
    Name string
}

func main() {
    b := workflow.NewBuilder[Task, TaskStatus]("task-processor")

    b.AddStep(TaskStatusCreated, func(ctx context.Context, r *workflow.Run[Task, TaskStatus]) (TaskStatus, error) {
        fmt.Printf("Processing: %s\n", r.Object.Name)
        return TaskStatusProcessed, nil
    }, TaskStatusProcessed)

    b.AddStep(TaskStatusProcessed, func(ctx context.Context, r *workflow.Run[Task, TaskStatus]) (TaskStatus, error) {
        fmt.Printf("Completed: %s\n", r.Object.Name)
        return TaskStatusCompleted, nil
    }, TaskStatusCompleted)

    wf := b.Build(memstreamer.New(), memrecordstore.New(), memrolescheduler.New())

    ctx := context.Background()
    wf.Run(ctx)
    defer wf.Stop()

    // Trigger a workflow
    runID, _ := wf.Trigger(ctx, "task-1", workflow.WithInitialValue(&Task{
        ID: "task-1",
        Name: "Process Invoice",
    }))

    // Wait for completion
    wf.Await(ctx, "task-1", runID, TaskStatusCompleted)
    fmt.Println("✅ Workflow completed!")
}

Enterprise Ready

Workflow provides enterprise-grade features:

  • Exactly-once processing guarantees via transactional outbox pattern
  • Built-in error handling with pause and retry mechanisms
  • Comprehensive observability via Prometheus metrics and Web UI
  • Horizontal scaling through role-based scheduling
  • Infrastructure flexibility via pluggable adapters
  • Production deployment patterns for various scales

Documentation

| Topic | Description |
| --- | --- |
| Getting Started | Install and build your first workflow |
| Core Concepts | Understand Runs, Events, and State Machines |
| Architecture | Deep dive into system design and components |
| Steps | Build workflow logic with step functions |
| Callbacks | Handle external events and webhooks |
| Timeouts | Add time-based operations |
| Connectors | Integrate with external event streams |
| Hooks | React to workflow lifecycle changes |
| Configuration | Tune performance and behavior |
| Monitoring | Observability and debugging |
| Adapters | Infrastructure integration guide |

Examples & Tutorials

| Example | Description |
| --- | --- |
| Order Processing | Complete e-commerce workflow with payments & fulfillment |

Community & Support

Installation

go get github.com/luno/workflow

# Production adapters (install as needed)
go get github.com/luno/workflow/adapters/kafkastreamer
go get github.com/luno/workflow/adapters/sqlstore
go get github.com/luno/workflow/adapters/rinkrolescheduler
go get github.com/luno/workflow/adapters/webui

License

MIT License


Ready to build reliable workflows? Get started in 5 minutes →

Documentation

Index

Constants

This section is empty.

Variables

View Source
var (
	ErrRecordNotFound       = errors.New("record not found")
	ErrTimeoutNotFound      = errors.New("timeout not found")
	ErrWorkflowInProgress   = errors.New("current workflow still in progress - retry once complete")
	ErrOutboxRecordNotFound = errors.New("outbox record not found")
	ErrInvalidTransition    = errors.New("invalid transition")
)

Functions

func AwaitTimeoutInsert

func AwaitTimeoutInsert[Type any, Status StatusType](
	t testing.TB,
	api API[Type, Status],
	foreignID, runID string,
	waitFor Status,
)

func CreateDiagram added in v0.2.0

func CreateDiagram[Type any, Status StatusType](a API[Type, Status], path string, d MermaidDirection) error

CreateDiagram creates a diagram in a md file for communicating a workflow's set of steps in an easy-to-understand manner.
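As a minimal sketch, assuming the wf value built in the Quick Start above (the output path and direction are illustrative):

// Write a Mermaid diagram of the workflow's steps to a markdown file.
err := workflow.CreateDiagram(wf, "docs/task-processor.md", workflow.LeftToRightDirection)
if err != nil {
	log.Fatal(err)
}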

func DeleteTopic added in v0.1.2

func DeleteTopic(workflowName string) string

func FilterConnectorEventUsing added in v0.1.2

func FilterConnectorEventUsing(e *ConnectorEvent, filters ...ConnectorEventFilter) bool

func FilterUsing added in v0.1.2

func FilterUsing(e *Event, filters ...EventFilter) bool

func MakeFilter added in v0.1.2

func MakeFilter(filters ...RecordFilter) *recordFilters

func Marshal

func Marshal[T any](t *T) ([]byte, error)

Marshal creates a single point of change if the encoding changes.

func Require

func Require[Type any, Status StatusType](
	t testing.TB,
	api API[Type, Status],
	foreignID string,
	waitForStatus Status,
	expected Type,
)

func RunStateChangeTopic added in v0.2.0

func RunStateChangeTopic(workflowName string) string

func Topic

func Topic(workflowName string, statusType int) string

func TriggerCallbackOn

func TriggerCallbackOn[Type any, Status StatusType, Payload any](
	t testing.TB,
	api API[Type, Status],
	foreignID, runID string,
	waitForStatus Status,
	p Payload,
)

func Unmarshal

func Unmarshal[T any](b []byte, t *T) error

Unmarshal creates a single point of change if the decoding changes.

func WaitFor added in v0.2.0

func WaitFor[Type any, Status StatusType](
	t testing.TB,
	api API[Type, Status],
	foreignID string,
	fn func(r *Run[Type, Status]) (bool, error),
)
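A hedged sketch of how Require and WaitFor might be used in an acceptance test, reusing the Task and TaskStatus types and the in-memory adapters from the Quick Start above (the builder's steps are elided):

func TestTaskProcessor(t *testing.T) {
	b := workflow.NewBuilder[Task, TaskStatus]("task-processor")
	// ... AddStep calls as in the Quick Start ...
	wf := b.Build(memstreamer.New(), memrecordstore.New(), memrolescheduler.New())

	ctx := context.Background()
	wf.Run(ctx)
	t.Cleanup(wf.Stop)

	_, err := wf.Trigger(ctx, "task-1", workflow.WithInitialValue(&Task{ID: "task-1", Name: "Process Invoice"}))
	if err != nil {
		t.Fatal(err)
	}

	// Block until the run reaches TaskStatusCompleted and assert the stored object.
	workflow.Require(t, wf, "task-1", TaskStatusCompleted, Task{ID: "task-1", Name: "Process Invoice"})

	// Or wait for an arbitrary condition on the run.
	workflow.WaitFor(t, wf, "task-1", func(r *workflow.Run[Task, TaskStatus]) (bool, error) {
		return r.Status == TaskStatusCompleted, nil
	})
}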

Types

type API

type API[Type any, Status StatusType] interface {
	// Name returns the name of the implemented workflow.
	Name() string

	// Trigger will kickstart a workflow Run for the provided foreignID, starting from the default entrypoint of
	// the workflow, which is the first "from" status added via the builder
	// (e.g. builder.AddStep(FromStatus, func{}, ToStatus)). There is no limitation on where you can start the
	// workflow from; use the WithStartingPoint trigger option to choose a different starting status. WithInitialValue
	// should be used when you need data to be present in the workflow Run before it starts, which reduces the need
	// for duplicating reads.
	//
	// foreignID should not be random and should be deterministic for the thing that you are running the workflow for.
	// This especially helps when connecting other workflows as the foreignID is the only way to connect the streams. The
	// same goes for Callback as you will need the foreignID to connect the callback back to the workflow instance that
	// was run.
	Trigger(
		ctx context.Context,
		foreignID string,
		opts ...TriggerOption[Type, Status],
	) (runID string, err error)

	// Schedule takes a cron spec and will call Trigger at the specified intervals. Schedule is a blocking call and all
	// schedule errors will be retried indefinitely. The same options are available for Schedule as they are
	// for Trigger.
	Schedule(foreignID string, spec string, opts ...ScheduleOption[Type, Status]) error

	// Await is a blocking call that returns the typed Run when the workflow of the specified run ID reaches the
	// specified status.
	Await(ctx context.Context, foreignID, runID string, status Status, opts ...AwaitOption) (*Run[Type, Status], error)

	// Callback can be used if Builder.AddCallback has been defined for the provided status. The data in the reader
	// will be passed to the CallbackFunc that you specify and so the serialisation and deserialisation is in the
	// hands of the user.
	Callback(ctx context.Context, foreignID string, status Status, payload io.Reader) error

	// Run must be called in order to start up all the background consumers required to run the workflow. Run
	// only needs to be called once. Any subsequent calls to Run are safe and are a no-op.
	Run(ctx context.Context)

	// Stop tells the workflow to shut down gracefully.
	Stop()
}
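As a hedged sketch of driving a workflow through this interface, reusing the Task and TaskStatus types from the Quick Start above (the foreign ID, the callback payload, and the assumption that a callback is registered for TaskStatusProcessed are illustrative):

func drive(ctx context.Context, api workflow.API[Task, TaskStatus]) error {
	// Start a run keyed by a deterministic foreign ID.
	runID, err := api.Trigger(ctx, "invoice-123", workflow.WithInitialValue(&Task{ID: "invoice-123", Name: "Process Invoice"}))
	if err != nil {
		return err
	}

	// Feed external data into a status that was registered via Builder.AddCallback.
	err = api.Callback(ctx, "invoice-123", TaskStatusProcessed, strings.NewReader(`{"approved":true}`))
	if err != nil {
		return err
	}

	// Block until the run reaches its terminal status.
	run, err := api.Await(ctx, "invoice-123", runID, TaskStatusCompleted)
	if err != nil {
		return err
	}
	fmt.Println("final status:", run.Status)
	return nil
}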

type Ack

type Ack func() error

Ack is used for the event streamer to safely update its cursor of what messages have been consumed. If Ack is not called then the event streamer, depending on implementation, will likely not keep track of which records / events have been consumed.

type AwaitOption

type AwaitOption func(o *awaitOpts)

func WithAwaitPollingFrequency added in v0.1.2

func WithAwaitPollingFrequency(d time.Duration) AwaitOption

type BuildOption

type BuildOption func(w *buildOptions)

func DisablePauseRetry added in v0.2.6

func DisablePauseRetry() BuildOption

DisablePauseRetry disables the automatic retrying of paused records. While a record remains paused, no new workflow runs can be triggered for its foreign ID.

func WithClock

func WithClock(c clock.Clock) BuildOption

WithClock configures the workflow's use of and access to time. Instead of calling time.Now() and other functionality from the time package directly, the workflow uses the provided clock, which makes time-dependent behaviour testable.

func WithCustomDelete added in v0.1.2

func WithCustomDelete[Type any](fn func(object *Type) error) BuildOption

WithCustomDelete allows specifying a custom deleter function for scrubbing PII data when a workflow Run enters RunStateRequestedDataDeleted. Once that function executes successfully, the RunState can move to RunStateDataDeleted.

func WithDebugMode

func WithDebugMode() BuildOption

WithDebugMode enables debug mode for a workflow, which results in additional logs such as when processes are launched or shut down, when events are skipped, etc.

func WithDefaultOptions added in v0.1.2

func WithDefaultOptions(opts ...Option) BuildOption

WithDefaultOptions applies the provided options to the entire workflow and not just to an individual process.

func WithErrorCounter added in v0.3.7

func WithErrorCounter(ec ErrorCounter) BuildOption

WithErrorCounter allows for specifying a custom error counter. The default is errorcounter.New().

func WithLogger added in v0.2.0

func WithLogger(l Logger) BuildOption

WithLogger allows for specifying a custom logger. The default is to use a wrapped version of log/slog's Logger.

func WithOutboxOptions added in v0.3.0

func WithOutboxOptions(opts ...OutboxOption) BuildOption

func WithPauseRetry added in v0.2.6

func WithPauseRetry(resumeAfter time.Duration) BuildOption

WithPauseRetry sets custom retry parameters for all paused records. The default is to retry records that have been paused for an hour, processing in batches of 10 records at a time so as to slowly reintroduce consumption.

Parameters: - resumeAfter refers to the time that must elapse before a paused record is included in a cycle.

func WithTimeoutStore added in v0.1.2

func WithTimeoutStore(s TimeoutStore) BuildOption

WithTimeoutStore configures a TimeoutStore, which is required when using timeouts in a workflow. It is not required by default, as timeouts are a less common feature requirement, but when needed the abstraction over scheduling, expiring, and executing timeouts is incredibly useful. Timeouts are one of the three key feature offerings of workflow, alongside sequential steps and callbacks.

func WithoutOutbox added in v0.3.4

func WithoutOutbox() BuildOption

WithoutOutbox disables the polling of the RecordStore outbox for pushing events to the provided EventStreamer and allows for external submission of outbox messages to the EventStreamer. This is useful when the workflow uses a record store that performs its own outbox purging, typically when the record store is centralised and shared across multiple workflows and services.
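A minimal sketch of combining build options, assuming the builder and in-memory adapters from the Quick Start above; the durations are illustrative:

wf := b.Build(
	memstreamer.New(),
	memrecordstore.New(),
	memrolescheduler.New(),
	workflow.WithDebugMode(),                // extra logs for launches, shutdowns, skipped events
	workflow.WithPauseRetry(30*time.Minute), // retry paused records after 30 minutes
	workflow.WithOutboxOptions(
		workflow.OutboxPollingFrequency(250*time.Millisecond),
	),
)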

type Builder

type Builder[Type any, Status StatusType] struct {
	// contains filtered or unexported fields
}

func NewBuilder

func NewBuilder[Type any, Status StatusType](name string) *Builder[Type, Status]

func (*Builder[Type, Status]) AddCallback

func (b *Builder[Type, Status]) AddCallback(from Status, fn CallbackFunc[Type, Status], allowedDestinations ...Status)

func (*Builder[Type, Status]) AddConnector

func (b *Builder[Type, Status]) AddConnector(
	name string,
	csc ConnectorConstructor,
	cf ConnectorFunc[Type, Status],
) *connectorUpdater[Type, Status]

func (*Builder[Type, Status]) AddStep

func (b *Builder[Type, Status]) AddStep(
	from Status,
	c ConsumerFunc[Type, Status],
	allowedDestinations ...Status,
) *stepUpdater[Type, Status]

func (*Builder[Type, Status]) AddTimeout

func (b *Builder[Type, Status]) AddTimeout(
	from Status,
	timer TimerFunc[Type, Status],
	tf TimeoutFunc[Type, Status],
	allowedDestinations ...Status,
) *timeoutUpdater[Type, Status]

func (*Builder[Type, Status]) Build

func (b *Builder[Type, Status]) Build(
	eventStreamer EventStreamer,
	recordStore RecordStore,
	roleScheduler RoleScheduler,
	opts ...BuildOption,
) *Workflow[Type, Status]

func (*Builder[Type, Status]) OnCancel added in v0.2.0

func (b *Builder[Type, Status]) OnCancel(hook RunStateChangeHookFunc[Type, Status])

func (*Builder[Type, Status]) OnComplete added in v0.2.0

func (b *Builder[Type, Status]) OnComplete(hook RunStateChangeHookFunc[Type, Status])

func (*Builder[Type, Status]) OnPause added in v0.2.0

func (b *Builder[Type, Status]) OnPause(hook RunStateChangeHookFunc[Type, Status])
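A hedged sketch of a callback and a completion hook on the Quick Start builder; the JSON payload shape is an assumption made for illustration:

// Accept an external approval via callback while the task sits in TaskStatusProcessed.
b.AddCallback(TaskStatusProcessed, func(ctx context.Context, r *workflow.Run[Task, TaskStatus], reader io.Reader) (TaskStatus, error) {
	var approval struct {
		Approved bool `json:"approved"`
	}
	if err := json.NewDecoder(reader).Decode(&approval); err != nil {
		return 0, err
	}
	if !approval.Approved {
		// Leave the run where it is and wait for another callback.
		return r.Skip()
	}
	return TaskStatusCompleted, nil
}, TaskStatusCompleted)

// React to a run reaching a completed RunState.
b.OnComplete(func(ctx context.Context, record *workflow.TypedRecord[Task, TaskStatus]) error {
	fmt.Printf("run %s completed with status %s\n", record.RunID, record.Status)
	return nil
})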

type CallbackFunc

type CallbackFunc[Type any, Status StatusType] func(ctx context.Context, r *Run[Type, Status], reader io.Reader) (Status, error)

type ConnectorConstructor added in v0.1.2

type ConnectorConstructor interface {
	Make(ctx context.Context, consumerName string) (ConnectorConsumer, error)
}

type ConnectorConsumer added in v0.1.2

type ConnectorConsumer interface {
	Recv(ctx context.Context) (*ConnectorEvent, Ack, error)
	Close() error
}

type ConnectorEvent added in v0.1.2

type ConnectorEvent struct {
	// ID is a unique ID for the event.
	ID string
	// ForeignID refers to the ID of the element that the event relates to.
	ForeignID string
	// Type relates to the StatusType that the associated record changed to.
	Type string
	// Headers stores meta-data in a simple and easily queryable way.
	Headers map[string]string
	// CreatedAt is the time that the event was produced and is generated by the event streamer.
	CreatedAt time.Time
}

ConnectorEvent defines a schema that is in line with how workflow uses an event notification pattern. This means that events only tell us what happened and do not transmit the state change. ConnectorEvent differs slightly from Event in that all fields, except for CreatedAt, are string based, which allows representing relations to elements with string identifiers and string-based types.

type ConnectorEventFilter added in v0.1.2

type ConnectorEventFilter func(e *ConnectorEvent) bool

ConnectorEventFilter can be passed to the event streaming implementation to allow specific consumers to filter events earlier in the process. True is returned when the event should be skipped.

type ConnectorFunc

type ConnectorFunc[Type any, Status StatusType] func(ctx context.Context, api API[Type, Status], e *ConnectorEvent) error

type ConsumerFunc

type ConsumerFunc[Type any, Status StatusType] func(ctx context.Context, r *Run[Type, Status]) (Status, error)

ConsumerFunc provides the workflow Run, whose Object may be modified if the data needs to change. Returning the next Status with a nil error stores the Run, along with its modifications, and advances the workflow. Returning r.Skip() skips the update, leaving the record untouched, and moves on to the next event. Returning a non-nil error causes the consumer to back off and try again until a nil error occurs, or until the record is paused once the PauseAfterErrCount threshold (the closest analogue to a Dead Letter Queue) has been reached, if configured.
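A hedged sketch of these conventions using the Quick Start's types; isReady is a hypothetical helper, not part of the package:

func process(ctx context.Context, r *workflow.Run[Task, TaskStatus]) (TaskStatus, error) {
	ready, err := isReady(ctx, r.Object.ID) // hypothetical helper
	if err != nil {
		// Non-nil error: the consumer backs off and retries this event.
		return 0, err
	}
	if !ready {
		// Skip: nothing is stored and the consumer moves on to the next event.
		return r.Skip()
	}
	// Mutations to r.Object are persisted along with the status change.
	r.Object.Name = strings.ToUpper(r.Object.Name)
	return TaskStatusProcessed, nil
}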

type ErrorCounter added in v0.3.7

type ErrorCounter interface {
	Add(err error, labels ...string) int
	Count(err error, labels ...string) int
	Clear(err error, labels ...string)
}

ErrorCounter defines an interface for counting occurrences of errors with optional labels.

type Event

type Event struct {
	// ID is a unique ID for the event generated by the event streamer.
	ID int64

	// ForeignID refers to the ID of a record in the record store.
	ForeignID string

	// Type relates to the StatusType that the associated record changed to.
	Type int

	// Headers stores meta-data in a simple and easily queryable way.
	Headers map[Header]string

	// CreatedAt is the time that the event was produced and is generated by the event streamer.
	CreatedAt time.Time
}

type EventFilter

type EventFilter func(e *Event) bool

EventFilter can be passed to the event streaming implementation to allow specific consumers to filter events earlier in the process. True is returned when the event should be skipped.
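Since EventFilter is just a predicate over *Event, a custom filter is a plain function. A minimal sketch (assuming FilterUsing reports whether any of the supplied filters want the event skipped):

// Skip events that relate to a specific foreign ID; returning true means "skip".
skipFixtures := func(e *workflow.Event) bool {
	return e.ForeignID == "test-fixture"
}

// An EventStreamer implementation could consult the filters before handing
// the event to the consumer.
if workflow.FilterUsing(event, skipFixtures) {
	// event should be skipped
}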

type EventReceiver added in v0.2.6

type EventReceiver interface {
	Recv(ctx context.Context) (*Event, Ack, error)
	Close() error
}

EventReceiver defines the common interface that the EventStreamer adapter must implement for allowing the workflow to receive events.

type EventSender added in v0.2.6

type EventSender interface {
	Send(ctx context.Context, foreignID string, statusType int, headers map[Header]string) error
	Close() error
}

EventSender defines the common interface that the EventStreamer adapter must implement for allowing the workflow to send events to the event streamer.

type EventStreamer

type EventStreamer interface {
	NewSender(ctx context.Context, topic string) (EventSender, error)
	NewReceiver(ctx context.Context, topic string, name string, opts ...ReceiverOption) (EventReceiver, error)
}

EventStreamer defines the event streaming adapter interface / API. All implementations should be tested with adaptertest.TestEventStreamer to ensure the behaviour is compatible with workflow.

type Filter added in v0.2.6

type Filter struct {
	Enabled      bool
	IsMultiMatch bool
	// contains filtered or unexported fields
}

func (Filter) Matches added in v0.2.6

func (f Filter) Matches(findValue string) bool

func (Filter) MultiValues added in v0.2.6

func (f Filter) MultiValues() []string

func (Filter) Value added in v0.2.6

func (f Filter) Value() string

type FilterTime added in v0.3.3

type FilterTime struct {
	Enabled bool
	// contains filtered or unexported fields
}

func (FilterTime) Matches added in v0.3.3

func (f FilterTime) Matches(findValue time.Time) bool

func (FilterTime) Value added in v0.3.3

func (f FilterTime) Value() time.Time
type Header

type Header string
const (
	HeaderWorkflowName  Header = "workflow_name"
	HeaderForeignID     Header = "foreign_id"
	HeaderTopic         Header = "topic"
	HeaderRunID         Header = "run_id"
	HeaderRunState      Header = "run_state"
	HeaderRecordVersion Header = "record_version"
	HeaderConnectorData Header = "connector_data"
)

type Logger added in v0.2.0

type Logger interface {
	// Debug will be used by workflow for debug logs when in debug mode.
	Debug(ctx context.Context, msg string, meta map[string]string)
	// Error is used when writing errors to the logs.
	Error(ctx context.Context, err error)
}

Logger interface allows the user of Workflow to provide a custom logger and not use the default which is provided in internal/logger. Workflow only writes two types of logs: Debug and Error. Error is only used at the highest level where an auto-retry process (consumers and pollers) errors and retries.

Error is used only when the error cannot be passed back to the caller and cannot be bubbled up any further.

Debug is used only when the Workflow is built with WithDebugMode.
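A minimal sketch of a custom Logger backed by log/slog, wired in via WithLogger; the attribute mapping is an assumption:

type slogLogger struct {
	l *slog.Logger
}

func (s slogLogger) Debug(ctx context.Context, msg string, meta map[string]string) {
	args := make([]any, 0, len(meta)*2)
	for k, v := range meta {
		args = append(args, k, v)
	}
	s.l.DebugContext(ctx, msg, args...)
}

func (s slogLogger) Error(ctx context.Context, err error) {
	s.l.ErrorContext(ctx, "workflow error", "error", err)
}

// wf := b.Build(streamer, store, scheduler, workflow.WithLogger(slogLogger{l: slog.Default()}))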

type MermaidDirection

type MermaidDirection string
const (
	UnknownDirection     MermaidDirection = ""
	TopToBottomDirection MermaidDirection = "TB"
	LeftToRightDirection MermaidDirection = "LR"
	RightToLeftDirection MermaidDirection = "RL"
	BottomToTopDirection MermaidDirection = "BT"
)

type MermaidFormat

type MermaidFormat struct {
	WorkflowName   string
	Direction      MermaidDirection
	Nodes          []int
	StartingPoints []int
	TerminalPoints []int
	Transitions    []MermaidTransition
}

type MermaidTransition

type MermaidTransition struct {
	From int
	To   []int
}

type Meta added in v0.3.0

type Meta struct {
	// RunStateReason provides a human-readable explanation for the current run state.
	// This field helps to understand the cause behind the current state of the run, such as "Paused", "Canceled", or "Deleted".
	// For instance, calling functions like Pause, Cancel, or DeleteData will update this field with a descriptive reason
	// that explains why the run is in its current state, making it easier to understand the context behind the state transition.
	RunStateReason string

	// StatusDescription provides a human-readable version of the Status field (int).
	// It allows for better understanding and readability of the status at the time of the computation, especially when reviewing
	// historical data. While the `Status` field stores the status as an integer, `StatusDescription` maps that status to a
	// corresponding string description, making it easier to interpret the status without needing to refer to status codes.
	StatusDescription string

	// Version defines the version of the record. The Workflow increments this value before storing it in the
	// RecordStore and uses it for data validation when consuming an event. The event will contain a matching version,
	// ensuring that the event and the record from the RecordStore align, and that the data is not stale. This versioning
	// mechanism also helps address replication lag issues, particularly in systems using databases that experience
	// replication delays between replicas. By matching the version, Workflow ensures that the version collected from
	// the reader is indeed the version expected. If the version in the RecordStore is greater, then the event can be
	// ignored as it has already been processed.
	Version uint

	// TraceOrigin contains a trace of the origin or source where the workflow Run was triggered from.
	// It provides a line trace or path that helps track where the execution was initiated,
	// offering context for debugging or auditing purposes by capturing the origin of the workflow trigger.
	TraceOrigin string
}

Meta provides contextual information, such as the string value of the Status at the time of the last update and the reason the RunState holds its current value. This is particularly useful when accessing the data of the record without the ability to cast it to a TypedRecord, which is often the case when debugging the data via the RecordStore.

type Option added in v0.1.2

type Option func(so *options)

func ConsumeLag added in v0.1.2

func ConsumeLag(d time.Duration) Option

ConsumeLag defines the minimum age of events that the consumer will consume. The workflow consumer will not consume events newer than the specified duration and will wait until they are old enough before consuming them.

func ErrBackOff added in v0.1.2

func ErrBackOff(d time.Duration) Option

ErrBackOff defines the time duration of the backoff of the workflow process when an error is encountered.

func LagAlert added in v0.1.2

func LagAlert(d time.Duration) Option

LagAlert defines the duration threshold after which the Prometheus metric defined in /internal/metrics/metrics.go switches to true, meaning that the workflow consumer is struggling to consume events fast enough and might need to be converted to a parallel consumer.

func ParallelCount added in v0.1.2

func ParallelCount(instances int) Option

ParallelCount defines the number of instances of the workflow process. The processes are sharded consistently and each is given a name such as "consumer-1-of-5" to show the instance number and the total number of instances that the process is a part of.

func PauseAfterErrCount added in v0.1.2

func PauseAfterErrCount(count int) Option

PauseAfterErrCount defines the number of times an error can occur before the record is updated to RunStatePaused. This is similar to a Dead Letter Queue in that the record will no longer be processed, won't block the workflow's consumers, and can be investigated and retried later on.

func PollingFrequency added in v0.1.2

func PollingFrequency(d time.Duration) Option

PollingFrequency defines how frequently the workflow process polls for changes.
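These options can be applied to every process in a workflow through the WithDefaultOptions build option. A minimal sketch (the adapter variables and durations are illustrative):

wf := b.Build(streamer, recordStore, roleScheduler,
	workflow.WithDefaultOptions(
		workflow.PollingFrequency(500*time.Millisecond), // how often processes poll for changes
		workflow.ErrBackOff(5*time.Second),              // wait after an error before retrying
		workflow.LagAlert(10*time.Minute),               // flag consumers falling this far behind
		workflow.PauseAfterErrCount(3),                  // pause a record after 3 consecutive errors
		workflow.ParallelCount(5),                       // run 5 sharded instances of each consumer
	),
)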

type OrderType added in v0.1.2

type OrderType int
const (
	OrderTypeUnknown    OrderType = 0
	OrderTypeAscending  OrderType = 1
	OrderTypeDescending OrderType = 2
)

func (OrderType) String added in v0.1.2

func (ot OrderType) String() string

type OutboxEvent added in v0.1.2

type OutboxEvent struct {
	// ID is a unique ID for this specific OutboxEvent.
	ID string

	// WorkflowName refers to the name of the workflow that the OutboxEventData belongs to.
	WorkflowName string

	// Data represents a slice of bytes the OutboxEventDataMaker constructs via serialising event data
	// in an expected way for it to also be deserialized by the outbox consumer.
	Data []byte

	// CreatedAt is the time that this specific OutboxEvent was produced.
	CreatedAt time.Time
}

type OutboxEventData added in v0.1.2

type OutboxEventData struct {
	ID string

	// WorkflowName refers to the name of the workflow that the OutboxEventData belongs to.
	WorkflowName string

	// Data represents a slice of bytes the OutboxEventDataMaker constructs via serialising event data
	// in an expected way for it to also be deserialized by the outbox consumer.
	Data []byte
}

func MakeOutboxEventData added in v0.2.0

func MakeOutboxEventData(record Record) (OutboxEventData, error)

MakeOutboxEventData creates an OutboxEventData that houses all the information that must be stored in, and be retrievable from, the outbox.

type OutboxOption added in v0.3.0

type OutboxOption func(w *outboxConfig)

func OutboxErrBackOff added in v0.3.0

func OutboxErrBackOff(d time.Duration) OutboxOption

func OutboxLagAlert added in v0.3.0

func OutboxLagAlert(d time.Duration) OutboxOption

func OutboxLookupLimit added in v0.3.0

func OutboxLookupLimit(limit int64) OutboxOption

func OutboxPollingFrequency added in v0.3.0

func OutboxPollingFrequency(d time.Duration) OutboxOption

type ReceiverOption added in v0.2.6

type ReceiverOption func(*ReceiverOptions)

func StreamFromLatest added in v0.2.6

func StreamFromLatest() ReceiverOption

StreamFromLatest tells the event streamer to start streaming events from the most recent event if there is no committed/stored offset (a cursor, for some event streaming platforms). If a consumer has received events before then this should have no effect and consumption should resume from where it left off previously.

func WithReceiverPollFrequency added in v0.2.6

func WithReceiverPollFrequency(d time.Duration) ReceiverOption

type ReceiverOptions added in v0.2.6

type ReceiverOptions struct {
	PollFrequency    time.Duration
	StreamFromLatest bool
}

type Record

type Record struct {
	// WorkflowName is the name of the workflow associated with this record. It helps to identify which workflow
	// this record belongs to.
	WorkflowName string

	// ForeignID is an external identifier for this record, often used to associate it with an external system or
	// resource. This can be useful for linking data across systems.
	ForeignID string

	// RunID uniquely identifies the specific execution or instance of the workflow Run and is a UUID v4.
	// It allows tracking of different runs of the same workflow and foreign id.
	RunID string

	// RunState represents the current state of the workflow Run (e.g., running, paused, completed).
	// This field is important for understanding the lifecycle status of the run.
	RunState RunState

	// Status is an integer representing the numerical status code of the record.
	// Terminal, or end, statuses are written in the past tense as they are fact and cannot change, while the present
	// continuous tense is used to describe a process currently taking place (e.g. submitting, creating, storing).
	Status int

	// Object contains the actual data associated with the workflow Run, serialized as bytes.
	// This could be any kind of structured data related to the workflow execution. To unmarshal this data Unmarshal
	// must be used to do so and the type matches the generic type called "Type" provided to the workflow builder
	// ( e.g. NewBuilder[Type any, ...])
	Object []byte

	// CreatedAt is the timestamp when the record was created, providing context on when the workflow Run was initiated.
	CreatedAt time.Time

	// UpdatedAt is the timestamp when the record was last updated. This helps track the most recent changes made to
	// the record.
	UpdatedAt time.Time

	// Meta stores any additional metadata related to the record, such as human-readable reasons for the RunState
	// or other contextual information that can assist with debugging or auditing.
	Meta Meta
}

Record is the cornerstone of Workflow. Record must always be wire compatible with no generics, as its intended purpose is to be the persisted data structure of a Run.

type RecordFilter added in v0.1.2

type RecordFilter func(filters *recordFilters)

func FilterByCreatedAtAfter added in v0.3.3

func FilterByCreatedAtAfter(after time.Time) RecordFilter

func FilterByCreatedAtBefore added in v0.3.3

func FilterByCreatedAtBefore(before time.Time) RecordFilter

func FilterByForeignID added in v0.1.2

func FilterByForeignID(foreignIDs ...string) RecordFilter

func FilterByRunState added in v0.1.2

func FilterByRunState(runStates ...RunState) RecordFilter

func FilterByStatus added in v0.1.2

func FilterByStatus[statusType ~int | ~int8 | ~int16 | ~int32 | ~int64](statuses ...statusType) RecordFilter

type RecordStore

type RecordStore interface {
	// Store should create or update a record depending on whether the underlying store is mutable or append only. Store
	// must implement transactions and a separate outbox store to store the outbox record (that should be
	// generated using MakeOutboxEventData) which can be retrieved when calling ListOutboxEvents and can be
	// deleted when DeleteOutboxEvent is called.
	Store(ctx context.Context, record *Record) error
	Lookup(ctx context.Context, runID string) (*Record, error)
	Latest(ctx context.Context, workflowName, foreignID string) (*Record, error)

	// List provides a slice of Record where the total items will be equal or less than the limit depending
	// on the offset provided and how many records remain after that ID.
	List(ctx context.Context, workflowName string, offsetID int64, limit int, order OrderType, filters ...RecordFilter) ([]Record, error)

	// ListOutboxEvents lists all events that are yet to be published to the event streamer. A requirement for
	// implementation of the RecordStore is to support a Transactional Outbox that has Event's written to it when
	// Store is called.
	ListOutboxEvents(ctx context.Context, workflowName string, limit int64) ([]OutboxEvent, error)
	// DeleteOutboxEvent will expect an Event's ID field and will remove the event from the outbox store when the
	// event has successfully been published to the event streamer.
	DeleteOutboxEvent(ctx context.Context, id string) error
}

RecordStore implementations should all be tested with adaptertest.TestRecordStore. The underlying implementation of store must support transactions or the ability to commit the record and an outbox event in a single call as well as being able to obtain an ID for the record before it is created.
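A hedged sketch of querying a RecordStore directly, for example when debugging; recordStore, the workflow name, and the literal values are illustrative, and the Task type is from the Quick Start above:

func listCompleted(ctx context.Context, recordStore workflow.RecordStore) error {
	// List up to 50 of the most recent completed runs created in the last 24 hours.
	records, err := recordStore.List(ctx, "task-processor", 0, 50, workflow.OrderTypeDescending,
		workflow.FilterByStatus(TaskStatusCompleted),
		workflow.FilterByRunState(workflow.RunStateCompleted),
		workflow.FilterByCreatedAtAfter(time.Now().Add(-24*time.Hour)),
	)
	if err != nil {
		return err
	}
	for _, rec := range records {
		var task Task
		if err := workflow.Unmarshal(rec.Object, &task); err != nil {
			return err
		}
		fmt.Println(rec.RunID, rec.Meta.StatusDescription, task.Name)
	}
	return nil
}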

type RoleScheduler

type RoleScheduler interface {
	// Await must return a child context of the provided (parent) context. Await should block until the role is
	// assigned to the caller. Only one caller should be able to be assigned the role at any given time. The returned
	// context.CancelFunc is called after each process execution. Some process executions are longer-lived than
	// others, but if any process errors, the context.CancelFunc will be called after the specified error backoff
	// has finished.
	Await(ctx context.Context, role string) (context.Context, context.CancelFunc, error)
}

RoleScheduler implementations should all be tested with adaptertest.TestRoleScheduler.

type Run added in v0.2.0

type Run[Type any, Status StatusType] struct {
	TypedRecord[Type, Status]
	// contains filtered or unexported fields
}

Run is a representation of a workflow run. It incorporates all the fields from the Record as well as having defined types for the Status and Object fields along with access to the RunStateController which controls the state of the run aka "RunState".

func NewTestingRun added in v0.2.0

func NewTestingRun[Type any, Status StatusType](
	t *testing.T,
	wr Record,
	object Type,
	opts ...TestingRunOption,
) *Run[Type, Status]

NewTestingRun should be used when testing logic that defines a workflow.Run as a parameter. This is usually the case in unit tests and would not normally be found when doing an Acceptance test for the entire workflow.
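A minimal sketch of unit-testing a single step function with NewTestingRun, reusing the Quick Start types and the illustrative process step sketched under ConsumerFunc above; the Record field values are arbitrary:

func TestProcessStep(t *testing.T) {
	r := workflow.NewTestingRun[Task, TaskStatus](t, workflow.Record{
		WorkflowName: "task-processor",
		ForeignID:    "task-1",
		RunID:        "run-1",
	}, Task{ID: "task-1", Name: "Process Invoice"})

	status, err := process(context.Background(), r)
	if err != nil {
		t.Fatal(err)
	}
	if status != TaskStatusProcessed {
		t.Fatalf("unexpected status: %v", status)
	}
}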

func (*Run[Type, Status]) Cancel added in v0.2.0

func (r *Run[Type, Status]) Cancel(ctx context.Context, reason string) (Status, error)

Cancel is intended to be used inside a workflow process where (Status, error) is the return signature. This allows the user to simply write "return r.Cancel(ctx, reason)" to cancel a record from inside a workflow, which results in the record being permanently left alone and never processed again.

func (*Run[Type, Status]) Pause added in v0.2.0

func (r *Run[Type, Status]) Pause(ctx context.Context, reason string) (Status, error)

Pause is intended to be used inside a workflow process where (Status, error) is the return signature. This allows the user to simply write "return r.Pause(ctx, reason)" to pause a record from inside a workflow, which results in the record being temporarily left alone and not processed until it is resumed.

func (*Run[Type, Status]) SaveAndRepeat added in v0.3.5

func (r *Run[Type, Status]) SaveAndRepeat() (Status, error)

func (*Run[Type, Status]) Skip added in v0.2.0

func (r *Run[Type, Status]) Skip() (Status, error)

Skip is a utility function to skip the update and move on to the next event (consumer) or execution (callback).

type RunState added in v0.1.2

type RunState int
const (
	RunStateUnknown              RunState = 0
	RunStateInitiated            RunState = 1
	RunStateRunning              RunState = 2
	RunStatePaused               RunState = 3
	RunStateCancelled            RunState = 4
	RunStateCompleted            RunState = 5
	RunStateDataDeleted          RunState = 6
	RunStateRequestedDataDeleted RunState = 7
)

func (RunState) Finished added in v0.1.2

func (rs RunState) Finished() bool

func (RunState) Stopped added in v0.1.2

func (rs RunState) Stopped() bool

Stopped is the type of status that requires consumers to ignore the workflow run as it is in a stopped state. Only paused workflow runs can be resumed, and this must be done via the workflow API or the Run methods. Cancellation is permanent and cannot be undone, whereas a paused run can be resumed.

func (RunState) String added in v0.1.2

func (rs RunState) String() string

func (RunState) Valid added in v0.1.2

func (rs RunState) Valid() bool

type RunStateChangeHookFunc added in v0.2.0

type RunStateChangeHookFunc[Type any, Status StatusType] func(ctx context.Context, record *TypedRecord[Type, Status]) error

RunStateChangeHookFunc defines the function signature for all hooks associated with the run.

type RunStateController added in v0.1.2

type RunStateController interface {
	// Pause will take the workflow run specified and move it into a temporary state where it will no longer be processed.
	// A paused workflow run can be resumed by calling Resume. ErrUnableToPause is returned when a workflow is not in a
	// state to be paused.
	Pause(ctx context.Context, reason string) error
	// Cancel can be called after Pause has been called. Cancelling a paused workflow run is permanent.
	// Once cancelled, DeleteData can be called and will move the run into an indefinite state of DataDeleted.
	// ErrUnableToCancel is returned when the workflow record is not in a state to be cancelled.
	Cancel(ctx context.Context, reason string) error
	// Resume can be called on a workflow run that has been paused. ErrUnableToResume is returned when the workflow
	// run is not in a state to be resumed.
	Resume(ctx context.Context) error
	// DeleteData can be called after a workflow run has been completed or cancelled. DeleteData should be used to
	// comply with the right to be forgotten such as complying with GDPR. ErrUnableToDelete is returned when the
	// workflow run is not in a state to be deleted.
	DeleteData(ctx context.Context, reason string) error
}

RunStateController allows the interaction with a specific workflow run.

func NewRunStateController added in v0.1.2

func NewRunStateController(store storeFunc, wr *Record) RunStateController

type ScheduleOption added in v0.1.2

type ScheduleOption[Type any, Status StatusType] func(o *scheduleOpts[Type, Status])

func WithScheduleFilter added in v0.1.2

func WithScheduleFilter[Type any, Status StatusType](
	fn func(ctx context.Context) (bool, error),
) ScheduleOption[Type, Status]

func WithScheduleInitialValue added in v0.1.2

func WithScheduleInitialValue[Type any, Status StatusType](t *Type) ScheduleOption[Type, Status]

type State

type State int
const (
	StateUnknown  State = 0
	StateShutdown State = 1
	StateRunning  State = 2
	StateIdle     State = 3
)

func (State) String added in v0.1.2

func (s State) String() string

type StatusType

type StatusType interface {
	~int | ~int32 | ~int64

	String() string
}

type TestingRecordStore

type TestingRecordStore interface {
	RecordStore

	Snapshots(workflowName, foreignID, runID string) []*Record
}

type TestingRunOption added in v0.2.0

type TestingRunOption func(*testingRunOpts)

func WithCancelFn added in v0.2.0

func WithCancelFn(cancel func(ctx context.Context) error) TestingRunOption

func WithDeleteDataFn added in v0.2.0

func WithDeleteDataFn(deleteData func(ctx context.Context) error) TestingRunOption

func WithPauseFn added in v0.2.0

func WithPauseFn(pause func(ctx context.Context) error) TestingRunOption

func WithResumeFn added in v0.2.0

func WithResumeFn(resume func(ctx context.Context) error) TestingRunOption

func WithSaveAndRepeatFn added in v0.3.5

func WithSaveAndRepeatFn(saveAndRepeat func(ctx context.Context) error) TestingRunOption

type TimeoutFunc

type TimeoutFunc[Type any, Status StatusType] func(ctx context.Context, r *Run[Type, Status], now time.Time) (Status, error)

TimeoutFunc runs once the timeout set by TimerFunc has expired. Returning the next Status with a nil error stores the provided record, along with any modifications made to it, and updates the status - continuing the workflow. Returning a skipped update (see Run.Skip) leaves the timeout skipped and not retried at a later date. If a non-nil error is returned, the TimeoutFunc will be called again until a nil error is returned.

type TimeoutRecord added in v0.1.2

type TimeoutRecord struct {
	ID           int64
	WorkflowName string
	ForeignID    string
	RunID        string
	Status       int
	Completed    bool
	ExpireAt     time.Time
	CreatedAt    time.Time
}

type TimeoutStore

type TimeoutStore interface {
	Create(ctx context.Context, workflowName, foreignID, runID string, status int, expireAt time.Time) error
	Complete(ctx context.Context, id int64) error
	Cancel(ctx context.Context, id int64) error
	List(ctx context.Context, workflowName string) ([]TimeoutRecord, error)
	ListValid(ctx context.Context, workflowName string, status int, now time.Time) ([]TimeoutRecord, error)
}

TimeoutStore implementations should all be tested with adaptertest.TestTimeoutStore.

type TimerFunc

type TimerFunc[Type any, Status StatusType] func(ctx context.Context, r *Run[Type, Status], now time.Time) (time.Time, error)

TimerFunc allows the expiry time of the timeout to be specified dynamically. If no time is set (the zero time is returned) then a timeout will not be created and the event will be skipped. If a time is set then a timeout will be created and, once it expires, TimeoutFunc will be called. Any non-nil error will be retried with backoff.

func DurationTimerFunc

func DurationTimerFunc[Type any, Status StatusType](duration time.Duration) TimerFunc[Type, Status]

func TimeTimerFunc

func TimeTimerFunc[Type any, Status StatusType](t time.Time) TimerFunc[Type, Status]
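A hedged sketch of wiring a timeout into the Quick Start builder with DurationTimerFunc; note that a TimeoutStore must also be registered via WithTimeoutStore at build time, and moving an expired task straight to TaskStatusCompleted is purely illustrative:

// If a task stays in TaskStatusCreated for more than 24 hours, close it out.
b.AddTimeout(TaskStatusCreated,
	workflow.DurationTimerFunc[Task, TaskStatus](24*time.Hour),
	func(ctx context.Context, r *workflow.Run[Task, TaskStatus], now time.Time) (TaskStatus, error) {
		// The task sat unprocessed for a day; complete it so the run can finish.
		return TaskStatusCompleted, nil
	},
	TaskStatusCompleted,
)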

type TriggerOption

type TriggerOption[Type any, Status StatusType] func(o *triggerOpts[Type, Status])

func WithInitialValue

func WithInitialValue[Type any, Status StatusType](t *Type) TriggerOption[Type, Status]

func WithStartingPoint added in v0.2.6

func WithStartingPoint[Type any, Status StatusType](startingStatus Status) TriggerOption[Type, Status]

type TypedRecord added in v0.2.0

type TypedRecord[Type any, Status StatusType] struct {
	Record
	Status Status
	Object *Type
}

TypedRecord differs from Record in that it contains a typed Object and a typed Status.

type Workflow

type Workflow[Type any, Status StatusType] struct {
	// contains filtered or unexported fields
}

func (*Workflow[Type, Status]) Await

func (w *Workflow[Type, Status]) Await(
	ctx context.Context,
	foreignID, runID string,
	status Status,
	opts ...AwaitOption,
) (*Run[Type, Status], error)

func (*Workflow[Type, Status]) Callback

func (w *Workflow[Type, Status]) Callback(
	ctx context.Context,
	foreignID string,
	status Status,
	payload io.Reader,
) error

func (*Workflow[Type, Status]) Name

func (w *Workflow[Type, Status]) Name() string

func (*Workflow[Type, Status]) Run

func (w *Workflow[Type, Status]) Run(ctx context.Context)

func (*Workflow[Type, Status]) Schedule added in v0.1.2

func (w *Workflow[Type, Status]) Schedule(
	foreignID string,
	spec string,
	opts ...ScheduleOption[Type, Status],
) error

func (*Workflow[Type, Status]) States

func (w *Workflow[Type, Status]) States() map[string]State

func (*Workflow[Type, Status]) Stop

func (w *Workflow[Type, Status]) Stop()

Stop cancels the context provided to all the background processes that the workflow launched and waits for all of them to shut down gracefully.

func (*Workflow[Type, Status]) Trigger

func (w *Workflow[Type, Status]) Trigger(
	ctx context.Context,
	foreignID string,
	opts ...TriggerOption[Type, Status],
) (runID string, err error)

Directories

Path Synopsis
_examples
callback module
connector module
schedule module
timeout module
webui module
adapters
jlog module
kafkastreamer module
redis module
sqlstore module
sqltimeout module
webui module
wredis module
examples module
internal
