This repo hosts the source for Jet's `dotnet new` templates.

## Equinox only

These templates focus solely on Consistent Processing using Equinox Stores:

- `eqxweb` - Boilerplate for an ASP .NET Core 3 Web App, with an associated storage-independent Domain project using [Equinox](https://github.com/jet/equinox).
- `eqxwebcs` - Boilerplate for an ASP .NET Core 3 Web App, with an associated storage-independent Domain project using Equinox, ported to C#.
- `eqxtestbed` - Host that allows running back-to-back benchmarks when prototyping models using [Equinox](https://github.com/jet/equinox), using different stores and/or store configuration parameters.
- `eqxPatterns` - Equinox Skeleton Deciders and Tests implementing various event sourcing patterns:
  - Managing a chain of Periods with a Rolling Balance carried forward (aka Closing the Books)
  - Feeding items into a List managed as a Series of Epochs with exactly-once ingestion logic
## Propulsion related

The following templates focus specifically on the usage of Propulsion components:

- `proProjector` - Boilerplate for a Publisher application that consumes events from one of:
  - (default) `--source cosmos`: an Azure CosmosDb ChangeFeedProcessor (typically unrolling events from `Equinox.CosmosStore` stores using `Propulsion.CosmosStore`)
    - `-k --parallelOnly` schedules Kafka emission to operate in parallel at document level (rather than per accumulated span of events for a stream)
  - `--source eventStore`: tracks an EventStoreDB >= 21.10 instance's `$all` feed using the gRPC interface (via `Propulsion.EventStoreDb`)
  - `--source sqlStreamStore`: `SqlStreamStore`'s `$all` feed
  - `--source dynamo`

  `-k` adds publishing to Apache Kafka using `Propulsion.Kafka`.
- `proConsumer` - Boilerplate for an Apache Kafka Consumer using `Propulsion.Kafka` (typically consuming from an app produced with `dotnet new proProjector -k`).
- `periodicIngester` - Boilerplate for a service that regularly walks the content of a source, feeding it into a Propulsion projector in order to manage the ingestion process using `Propulsion.Feed.PeriodicSource`
- AWS CDK Wiring for programmatic IaC deployment of `Propulsion.DynamoStore.Indexer` and `Propulsion.DynamoStore.Notifier`

The bulk of the remaining templates have a consumer aspect, and hence involve usage of `Propulsion`.
The specific behaviors carried out in reaction to incoming events often use `Equinox` components.
- `proReactor` - Boilerplate for an application that handles reactive actions, ranging from publishing notifications via Kafka (simple, or summarising events) through to driving follow-on actions implied by events (e.g., updating a denormalized view of an aggregate)

  Input options are:
  - (default) `Propulsion.Cosmos`/`Propulsion.DynamoStore`/`Propulsion.EventStoreDb`/`Propulsion.SqlStreamStore`, depending on whether the program is run with `cosmos`, `dynamo`, `es` or `sss` arguments
  - `--source kafkaEventSpans`: changes the source to be Kafka Event Spans, as emitted from `dotnet new proProjector --kafka`

  The reactive behavior template has the following options:
  - Default processing shows importing (in summary form) from an aggregate in `EventStore` or a CosmosDB ChangeFeedProcessor to a Summary form in `Cosmos`
  - `--blank`: removes the sample Ingester logic, yielding a minimal projector
  - `--kafka` (without `--blank`): adds optional projection to Apache Kafka using `Propulsion.Kafka` (instead of ingesting into a local `Cosmos` store). Produces a versioned Summary Event feed.
  - `--kafka --blank`: provides wiring for producing to Kafka, without the summary reading logic etc.

  NOTE: at present, checkpoint storage when projecting from EventStore uses Azure CosmosDB - help wanted ;)
- `feedSource` - Boilerplate for an ASP.NET Core Web Api serving a feed of items stashed in an `Equinox.CosmosStore`. See `dotnet new feedConsumer` for the associated consumption logic
- `feedConsumer` - Boilerplate for a service consuming a feed of items served by `dotnet new feedSource` using `Propulsion.Feed`
- `summaryConsumer` - Boilerplate for an Apache Kafka Consumer using `Propulsion.Kafka` to ingest versioned summaries produced by a `dotnet new proReactor --kafka`.
- `trackingConsumer` - Boilerplate for an Apache Kafka Consumer using `Propulsion.Kafka` to ingest accumulating changes in an `Equinox.Cosmos` store idempotently.
- `proSync` - Boilerplate for a console app that syncs events between `Equinox.Cosmos` and `Equinox.EventStore` stores using the relevant `Propulsion`.* libraries, filtering/enriching/mapping Events as necessary.
- `proArchiver` - Boilerplate for a console app that syncs Events from relevant Categories from a Hot container to an associated warm `Equinox.Cosmos` store's archival container using the relevant `Propulsion`.* libraries.
  - An Archiver is intended to run continually as an integral part of a production system.
- `proPruner` - Boilerplate for a console app that inspects Events from relevant Categories in an `Equinox.Cosmos` store's Hot container and uses that to drive the removal of (archived) Events that have Expired from the associated Hot Container, using the relevant `Propulsion`.* libraries.
  - While a Pruner does not consume a large amount of RU capacity from either the Hot or Warm Containers, running one continually is definitely optional; a Pruner only has a purpose when there are Expired events in the Hot Container. Running periodically during low-load periods may be appropriate, depending on the lifetime profile of the events in your system.
  - Reducing the traversal frequency needs to be balanced against the primary goal of deleting from the Hot Container: preventing it splitting into multiple physical Ranges.
  - It is necessary to reset the CFP checkpoint (delete the checkpoint documents, or use a new Consumer Group Name) to trigger a re-traversal if events have expired since the last time a traversal took place.
- `proIndexer` - Derivative of the `proReactor` template. 🙏 @ragiano215
  - Specific to CosmosDB, though it would be easy to make it support DynamoDB
  - For applications where the reactions use the same Container, credentials etc. as the one being Monitored by the change feed processor (simpler config wiring and less argument processing)
  - Includes full wiring for Prometheus metrics emission from the Handler outcomes
  - Demonstrates the notion of an `App` project that hosts wiring common to a set of applications, without having the Domain layer reference any of it.
  - Implements `sync` and `snapshot` subcommands to enable updating snapshots and/or keeping a cloned database in sync
- `eqxShipping` - Example demonstrating the implementation of a Process Manager using `Equinox` that manages the enlistment of a set of `Shipment` Aggregate items into a separated `Container` Aggregate as an atomic operation. 🙏 @Kimserey
  - Processing is fully idempotent; retries, concurrent or overlapping transactions are intended to be handled thoroughly and correctly.
  - If any `Shipment`s cannot be `Reserved`, those that have been get `Revoked`, and the failure is reported to the caller.
  - Includes a `Watchdog` console app (based on `dotnet new proReactor --blank`) responsible for concluding abandoned transaction instances (e.g., where processing is carried out in response to an HTTP request and the Client fails to retry after a transient failure leaves processing in a non-terminal state).
  - Does not include wiring for Prometheus metrics (see `proHotel`)
- `proHotel` - Example demonstrating the implementation of a Process Manager using `Equinox` that coordinates the merging of a set of `GuestStay`s in a Hotel as a single `GroupCheckout` activity that covers the payment for each of the stays selected.
  - Illustrates correct idempotent logic such that concurrent group checkouts competing to cover the same stay work correctly, even when commands are retried.
  - The Reactor program is wired to support consuming from `MessageDb` or `DynamoDb`.
  - Unit tests validate correct processing of reactions without the use of projection support mechanisms from the Propulsion library.
  - Integration tests establish the Reactor as an xUnit.net Collection Fixture (for MessageDb or DynamoDb) or Class Fixtures (for MemoryStore) to enable running scenarios that rely on processing managed by the Reactor program, without having to run it concurrently.
  - Includes wiring for Prometheus metrics.
As dictated by the design of dotnet's templating mechanism, consumption is ultimately via the .NET Core SDK's `dotnet new` CLI facility and/or associated facilities in Visual Studio, Rider etc.

To use from the command line, the outline is:

1. Install a template locally (use `dotnet new --list` to view your current list)
2. Use `dotnet new` to expand the template in a given directory
```powershell
# install the templates into `dotnet new`s list of available templates so it can be picked up by
# `dotnet new`, Rider, Visual Studio etc.
dotnet new -i Equinox.Templates

# --help shows the options, including wiring for storage subsystems
# -t includes an example Domain, Handler, Service and Controller to test from app to storage subsystem
dotnet new eqxweb -t --help
# if you want to see a C# equivalent:
dotnet new eqxwebcs -t
# see readme.md in the generated code for further instructions regarding the TodoBackend
# that the -t switch above triggers the inclusion of
start readme.md

# ... to add an Ingester that reacts to events as they are written (via EventStore $all or
# CosmosDB ChangeFeedProcessor), summarising them and feeding them into a secondary stream
# (equivalent to pairing the Projector and Ingester programs we make below)
md -p ../DirectIngester | Set-Location
dotnet new proReactor

# ... to add a Projector
md -p ../Projector | Set-Location
# (-k emits to Kafka and hence implies having a Consumer)
dotnet new proProjector -k
start README.md

# ... to add a Generic Consumer (proProjector -k emits to Kafka and hence implies having a Consumer)
md -p ../Consumer | Set-Location
dotnet new proConsumer
start README.md

# ... to add an Ingester based on the events the Projector sends to Kafka
# (equivalent in function to DirectIngester, above)
md -p ../Ingester | Set-Location
dotnet new proReactor --source kafkaEventSpans

# ... to add a Summary Projector
md -p ../SummaryProducer | Set-Location
dotnet new proReactor --kafka
start README.md

# ... to add a Custom Projector
md -p ../CustomProjector | Set-Location
dotnet new proReactor --kafka --blank
start README.md

# ... to add a Summary Consumer (ingesting output from `SummaryProducer`)
md -p ../SummaryConsumer | Set-Location
dotnet new summaryConsumer
start README.md

# ... to add a Testbed
md -p ../My.Tools.Testbed | Set-Location
# -e -c # add EventStore and CosmosDb support to go with the default support for MemoryStore
dotnet new eqxtestbed -c -e
start README.md
# run for 1 min with 10000 rps against an in-memory store
dotnet run -p Testbed -- run -d 1 -f 10000 memory
# run for 30 mins with 2000 rps against a local EventStore
dotnet run -p Testbed -- run -f 2000 es
# run for two minutes against CosmosDb (see https://github.com/jet/equinox#quickstart for provisioning instructions)
dotnet run -p Testbed -- run -d 2 cosmos

# ... to add a Sync tool
md -p ../My.Tools.Sync | Set-Location
# (-m includes an example of how to upconvert from similar event-sourced representations in an existing store)
dotnet new proSync -m
start README.md

# ... to add a Shipping Domain example containing a Process Manager with a Watchdog Service
md -p ../Shipping | Set-Location
dotnet new eqxShipping

# ... to add a Reactor against a Cosmos container for both listening and writing
md -p ../Indexer | Set-Location
dotnet new proIndexer

# ... to add a Hotel Sample for use with MessageDb or DynamoDb
md -p ../ProHotel | Set-Location
dotnet new proHotel
```
There are integration tests in the repo that check everything compiles before we merge/release:

```powershell
dotnet build build.proj # build Equinox.Templates package, run tests
dotnet pack build.proj # build Equinox.Templates package only
dotnet test build.proj -c Release # test the alphabetically newest file in bin/nupkgs only (-c Release to run full tests)
```
One can also do it manually:

1. Generate the package (per set of changes you make locally)

   a. ensure the template's base code compiles (see the runnable templates concept in the `dotnet new` docs)

   b. package into a local nupkg:

      ```sh
      $ cd ~/dotnet-templates
      $ dotnet pack build.proj
      Successfully created package '/Users/me/dotnet-templates/bin/nupkg/Equinox.Templates.3.10.1-alpha.0.1.nupkg'.
      ```

2. Test, per variant (best to do this in another command prompt in a scratch area)

   a. install the templates into the `dotnet new` local repo:

      ```sh
      $ dotnet new -i /Users/me/dotnet-templates/bin/nupkg/Equinox.Templates.3.10.1-alpha.0.1.nupkg
      ```

   b. get to an empty scratch area:

      ```sh
      $ mkdir -p ~/scratch/templs/t1
      $ cd ~/scratch/templs/t1
      ```

   c. test a variant (i.e. per `symbol` in the config):

      ```sh
      $ dotnet new proReactor -k # an example - in general you only need to test stuff you're actually changing
      $ dotnet build # test it compiles
      $ # REPEAT N TIMES FOR COMBINATIONS OF SYMBOLS
      ```

3. Uninstall the locally built templates from step 2a:

   ```sh
   $ dotnet new -u Equinox.Templates
   ```
- ✅ DO define strongly typed ids and a `type Store.Config` in `namespace Domain`
- ❌ DONT have a global `module Types`. AVOID a per-Aggregate `module Types` or top level `type` definitions
- ✅ DO group stuff predictably per `module Aggregate`: `Stream, Events, Reactions, Fold, Decisions, Service, Factory`. Keep grouping within that.
- ❌ DONT `open <Aggregate>`, `open <Aggregate>.Events` or `open <Aggregate>.Fold`
- ✅ DO design for idempotency everywhere. ❌ DONT return TMI that the world should not be taking a dependency on.
- ❌ DONT use `Result` or a per-Aggregate `type Error`. ✅ DO use minimal result types per decision function
- ❌ DONT expose your `Fold.State` outside your Aggregate.
- ❌ DONT be a slave to CQRS for all read paths. ✅ DO use `AllowStale`. 🤔 CONSIDER `QueryCurrent`
- ❌ DONT be a slave to the Command pattern or MediatR
- ✅ DO maintain common wiring in an `App` project, as per `propulsion-indexer`
F# excels at succinctly expressing a high level design for a system; see Designing with types by Scott Wlaschin for many examples.
For an event sourced system, it gets even better: it's not uncommon to be able to, using only a screen or two of types, convey a system's significant events in a manner that's legible for both technical and non-technical stakeholders.
It's important not to take this too far though; ultimately, as a system grows, the need for Events to be grouped into Categories must become the organizing constraint.
That means letting go of something that feels almost perfect...
In some cases, Aggregates have overlapping concerns that can mean some aspects of Event Contracts are common. It can be very tempting to keep this DRY as shared types in a central place. These benefits must unfortunately be relinquished. Instead:
❌ BAD shared types:

```fsharp
// <Types.fs>
module Domain.Types

type EntityContext = { name: string; area: string }
...

// <Aggregate>.fs
module Aggregate

open Domain.Types

module Events =
    type Event =
        // ❌ BAD defines a contract that can be changed by someone adding or renaming a field in a shared type
        | Created of {| creator: UserId; context: EntityContext |}
        ...

// <Aggregate2>.fs
module Aggregate2

module Events =
    type Event =
        | Copied of {| by: UserId; context: Types.EntityContext |}
        ...
```
Instead, let each `module <Aggregate>` maintain its own version of each type that will be used in an event, within its `module Events`.
The `decide` function can map from an input type if desired. The important thing is that the Aggregate will need to be able to roundtrip its types in perpetuity; having to disentangle the overlaps between types shared across multiple Aggregates is simply never worth it.
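As a concrete illustration of that mapping, here's a minimal hypothetical sketch, reusing the `Domain.Types.EntityContext` and `UserId` shapes from the snippets above (the `ofInput` helper name is invented):

```fsharp
module Aggregate

module Events =
    // the Aggregate's own copy of the shape it persists; a change to Domain.Types
    // can no longer silently change this contract
    type EntityContext = { name: string; area: string }
    type Event =
        | Created of {| creator: UserId; context: EntityContext |}

module Fold =
    type State = Initial | Created
    let initial = Initial
    let evolve _state = function Events.Created _ -> Created
    let fold: State -> Events.Event seq -> State = Seq.fold evolve

module Decisions =
    // map from the externally defined input type at the decision boundary
    let private ofInput (x: Domain.Types.EntityContext): Events.EntityContext =
        { name = x.name; area = x.area }
    let create creator context = function
        | Fold.Initial -> [| Events.Created {| creator = creator; context = ofInput context |} |]
        | Fold.Created -> [||] // idempotent re-run
```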
While sharing the actual types is a no-no, having common id types, and using those for references across streams is valid.
It's extremely valuable for these to be strongly typed.
```fsharp
module Domain.Types

type UserId = ...
type TenantId = ...
...

module Domain.User

module Events =
    type Joined = { tenant: TenantId; authorizedBy: UserId }
```
Per strongly-typed id type, having an associated `module` with the same name alongside works well.
This enables one to quickly identify and/or navigate the various ways in which such ids are generated, parsed and/or validated.
```fsharp
namespace Domain

open System
open FSharp.UMX

type UserId = Guid<userId>
and [<Measure>] userId

module UserId =
    let private ofGuid (id: Guid): UserId = %id
    let private toGuid (id: UserId): Guid = %id
    let parse (input: string): UserId = input |> Guid.Parse |> ofGuid
    let toString (x: UserId): string = (toGuid x).ToString "N"
```
Wherever possible, the templates use strongly typed identifiers, particularly for ones that might naturally be represented as primitives, i.e. `string` etc. `FSharp.UMX` is useful to transparently pin types in a message contract cheaply - it works well in a number of contexts:

- Coding/decoding events using FsCodec (because Events are things that have happened, validating them is not a central concern as we load and fold these incontrovertible Facts)
- Model binding in ASP.NET (because the types de-sugar to the primitives, no special support is required). Unlike events, there are more considerations in play in this context though; often you'll want to apply validation to the inputs (representing Commands) as you map them to Value Objects, Making Illegal States Unrepresentable.
TODO write up the fact that while UMX is a good default, there are nuances wrt `string`-based ids:

- they are case sensitive
- they work transparently for most serializers, and also work for model binding
- how/do you validate nulls/lengths, XSS protection, rejecting massive ones?
- if you use UMX, when/how will you validate?

TODO write up the fact that while UMX is a good default, there are nuances wrt `Guid`-based ids:

- parsing needs to not be sensitive to case
- rendering with or without dashes/braces can be messy to configure with JSON serializers
- `Guid` is not actually part of JSON
- it provides some XSS/null protection, but is that worth it?
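To make those nuances concrete, here's a hedged sketch of a string-backed id using FSharp.UMX, where the validation policy (non-empty, length cap, lowercasing to sidestep case sensitivity) is an illustrative assumption, not a prescription:

```fsharp
open System
open FSharp.UMX

[<Measure>] type skuId
type SkuId = string<skuId>

module SkuId =
    // validate at the edge; once a SkuId exists, the rest of the system assumes it's well-formed
    let parse (raw: string): SkuId =
        if String.IsNullOrWhiteSpace raw then invalidArg (nameof raw) "SkuId must be non-empty"
        elif raw.Length > 64 then invalidArg (nameof raw) "SkuId exceeds maximum length"
        else %(raw.ToLowerInvariant()) // normalize, as string comparisons are case sensitive
    let toString (x: SkuId): string = %x
```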
TODO write something in more depth

It's correct to say that few systems actually switch databases in real life. Defining a `type` that holds only a `*StoreContext` and a `Cache` can feel like pointless abstraction. In `propulsion-hotel`, we have:
```fsharp
[<RequireQualifiedAccess; NoComparison; NoEquality>]
type Config =
    | Memory of Equinox.MemoryStore.VolatileStore<struct (int * System.ReadOnlyMemory<byte>)>
    | Dynamo of Equinox.DynamoStore.DynamoStoreContext * Equinox.Cache
    | Mdb of Equinox.MessageDb.MessageDbContext * Equinox.Cache
```
Clearly, not many systems are deployed that arbitrarily target MessageDB or DynamoDB. More common is the configuration in `propulsion-cosmos-reactor`:
```fsharp
[<NoComparison; NoEquality; RequireQualifiedAccess>]
type Config =
    | Cosmos of Equinox.CosmosStore.CosmosStoreContext * Equinox.Cache
```
The advantage of still having a `type Config` in place is to be able to step in and generalize things.
For instance, when such a system expands from having a single store to also having a separated views store, it can become:
```fsharp
[<NoComparison; NoEquality; RequireQualifiedAccess>]
type Config =
    | Cosmos of contexts: CosmosContexts * cache: Equinox.Cache
and [<NoComparison; NoEquality>] CosmosContexts =
    { main: Equinox.CosmosStore.CosmosStoreContext
      views: Equinox.CosmosStore.CosmosStoreContext
      /// Variant of `main` that's configured such that `module Snapshotter` updates will never trigger a calve
      snapshotUpdate: Equinox.CosmosStore.CosmosStoreContext }
```
💡 This does mean that the `Domain` project will need to reference the concrete store packages (i.e., `Equinox.CosmosStore`, `Equinox.MemoryStore` etc.).

💡 The wiring that actually establishes the `Context`s should be external to the `Domain` project, in an `App` project (as `propulsion-indexer` does), and should only be triggered within a Host application's Composition Root.
There are established conventions documented in Equinox's `module Aggregate` overview.
Having the Event Contracts, State and Decision logic in a single module can feel wrong when you get over e.g. 1000 lines of code; instincts to split the file on some basis will kick in. Don't do it; splitting the file is hiding complexity under the carpet.
The Event Contracts are the most important contract that an Aggregate has - decision logic will churn endlessly. You might even implement logic against it in other languages. But the Event Contracts you define are permanent. As a developer fresh to a project, the event contracts are often the best starting point as you try to understand what a given aggregate is responsible for.
The State type and the associated `evolve` and `fold` functions are intimately tied to the Event Contracts. Over time, ugliness and upconversion can lead to noise, and temptation to move it out. Don't do it; being able to understand the full coupling is critical to understanding how things work, and equally critical to being able to change or add functions.
Decision logic bridges between the two worlds of State and Events. The State being held exists only to serve the Decision logic. The only reason for Event Contracts is to record Decisions. Trying to pretend that some of the Decisions are less important and hence should live elsewhere is rarely a good idea. How decisions are made, and how those decisions are encoded as Events should be encapsulated within the Aggregate.
In some cases, it can make sense for a decision function to be a skeleton function that passes out to some helper functions that it's passed, to assist in the decision making and/or composing some details that go into the event body.
Sometimes these functions are best passed as arguments to the Service Method that will call the decision function.
In other cases, the relevant helper functions can be passed to the `type Service` as arguments when it's being constructed in the `Factory`.
The critical bit is that the bits that need to touch the State and/or generate Events should not leave the `module Aggregate`, as there is no better place in the system for that to live.
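By way of illustration, a hypothetical sketch (the event/state shapes and the `calculatePrice` helper are invented for the example): the pricing logic is injected, but only the decision function touches the State and produces the Event:

```fsharp
module Events =
    type Event = CheckedOut of {| total: decimal |}

module Fold =
    type State =
        | Open of items: string[]
        | Concluded

module Decisions =
    // calculatePrice is supplied by the Service method (or baked in via the Factory);
    // only these few lines touch State and produce Events
    let checkout (calculatePrice: string[] -> decimal) = function
        | Fold.Open items -> [| Events.CheckedOut {| total = calculatePrice items |} |]
        | Fold.Concluded -> [||] // idempotent: already concluded
```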
This is akin to the maxim from the GOOS book (Growing Object-Oriented Software, Guided by Tests): Listen to your Tests. If a given Aggregate has too many responsibilities, that's feedback you should be using to your advantage, not lamenting or ignoring:

- if an aggregate consumes or produces an extraordinary number of event types, maybe there's an axis on which they can be split?
- if there are multiple splittable pieces of state in the overall State, maybe you need two aggregates over the same stream? Or two sibling categories that share an id?
- should some of the logic and/or events be part of an adjacent aggregate? (why should a Cart have Checkout flow elements in it?)
- if there are many decision functions, is that a sign that there's a missing workflow or process manager that should be delegating some (cohesive) responsibilities to this aggregate?
- if a decision function is 300 lines, but only 5 lines touch the state and only 4 lines produce an event, can you extract just that logic to a single boring module that can be unit tested independent of how the State and Events are ultimately maintained? (see the sketch below)
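For instance, a hedged sketch of that extraction (names invented; assumes a `Fold.State` with an `available` count and an `Events.Allocated` case): the gnarly computation lives in a boring, independently unit-testable module, and the decision function shrinks to a thin shim:

```fsharp
// boring, pure, trivially unit-testable without any State or Events in sight
module Allocation =
    let plan (requested: int) (available: int) = min requested available

module Decisions =
    let allocate requested (state: Fold.State) =
        let granted = Allocation.plan requested state.available // the one line touching State
        if granted = 0 then [||]
        else [| Events.Allocated {| count = granted |} |]       // the one line producing an Event
```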
Having the Event Contracts be their own `module` is a critical forcing function for good aggregate design. Having all types and all cases live in one place, and being able to quickly determine where each Event is produced, is key to being able to understand the moving parts of a system.
When modelling, it's common to include primary identifiers (e.g. a user id), or contextual identifiers (e.g. a tenant id), in an Event in order to convey the relationships between events in the system as a whole; you want the correlations to stand out. In the implementation however, repeating the identity information in every event is a major liability:

- the State needs to contain the values - that's more noise
- Event versioning gets messier - imagine extending a system to make it multi-tenant; you'd need to be able to handle all the historic events that predated the concept

The alternative is for a workflow to react to the events in the context of a stream - if some logic needs to know the user id, let the `User` reactor handling the `User` event on a `User` Stream pass that context forward, if relevant in that context.
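A hypothetical sketch of that alternative, following the `Reactions` facade pattern covered later in this document (the `Decode` pattern and the injected `notify` function are assumed): the user id is recovered from the stream identity, so the event bodies stay lean:

```fsharp
module UserNotifications =

    let handle notify (stream, events) = async {
        match stream, events with
        | User.Reactions.Decode (userId, events) ->
            // userId comes from the stream name; no need to repeat it in every event body
            do! notify userId events
        | _ -> () }
```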
Having to prefix types and/or Event Type names with `Events.` is a feature, not a bug.
✅ DO encapsulate inferences from events and `Stream` names in a `module Reactions` facade.

`module Stream` should always be `private`. Any classification of events, and any parsing of stream names, should be via helpers within the `module Reactions`, e.g.:
```fsharp
// ❌ BAD Stream module is `public`
module Stream =
    let [<Literal>] Category = "tenant"

// ❌ BAD
module TenantNotifications

let categories = [ Tenant.Stream.Category ]

let handle (stream, events) = async {
    if StreamName.category stream = Tenant.Stream.Category then
        let tenantId = FsCodec.StreamName.Split stream |> snd |> TenantId.parse
        ...

// ❌ BAD
module Tenant.Tests

let [<Fact>] ``generated correct events`` () =
    let id = TenantId.generate ()
    // ❌ BAD boilerplate, referencing multiple modules
    let streamName = FsCodec.StreamName.create Tenant.Stream.Category id
```
Instead, keep the `module Stream` private:
```fsharp
module private Stream =
    let [<Literal>] Category = "tenant"
    let id (id: TenantId) = FsCodec.StreamId.gen TenantId.toString id
    let decodeId = FsCodec.StreamId.dec TenantId.parse
    let name = id >> FsCodec.StreamName.create Category
    let tryDecode = FsCodec.StreamName.tryFind Category >> ValueOption.map decodeId
```
Then, selectively expose a relevant interface via a `module Reactions` facade:
```fsharp
// ✅ GOOD expose all reactions and test integration helpers via a Reactions facade
module Reactions =

    // ✅ GOOD - F12 can show us all reaction logic
    let categoryName = Stream.Category
    // ✅ GOOD - if a unit test needs to generate a stream name, it can supply the tenant id
    let streamName = Stream.name
    let [<return: Struct>] (|For|_|) = Stream.tryDecode

    // ✅ OK generic decoding function (but the next ones are better...)
    let dec = Streams.Codec.dec<Events.Event>
    let [<return: Struct>] (|Decode|_|) = function
        | struct (For id, _) & Streams.Decode dec events -> ValueSome struct (id, events)
        | _ -> ValueNone

    let deletionNamePrefix tenantIdStr = $"%s{Stream.Category}-%s{tenantIdStr}"
```
In some cases, the filtering and/or classification functions can be more than just simple forwarding functions:
```fsharp
// ✅ GOOD - better than sprinkling `nameof(Aggregate.Events.Completed)` in an adjacent `module`
/// Used by the Watchdog to infer whether a given event signifies that the processing has reached a terminal state
let isTerminalEvent (encoded: FsCodec.ITimelineEvent<_>) =
    encoded.EventType = nameof(Events.Completed)

let private impliesStateChange = function Events.Snapshotted _ -> false | _ -> true

// ✅ BETTER specific pattern that extracts relevant items, keeping it close to the Event definitions
let (|ImpliesStateChange|NoStateChange|NotApplicable|) = function
    | Parse (tenantId, events) ->
        if events |> Array.exists impliesStateChange then ImpliesStateChange (tenantId, events.Length)
        else NoStateChange events.Length
    | _, events -> NotApplicable events.Length
```
Ultimately, the consumption logic becomes clearer, and is less intimately intertwined with the implementation:
```fsharp
// ✅ GOOD
module TenantNotifications

let categories = [ Tenant.Reactions.categoryName ]

let handle (stream, events) = async {
    match stream, events with
    | Tenant.Reactions.Decode (tenantId, events) ->
        // ...
```
or:
```fsharp
// ✅ BETTER - intention revealing names, classification encapsulated close to the events
module TenantNotifications

let categories = [ Tenant.Reactions.categoryName ]

let handle (stream, events) = async {
    match struct (stream, events) with
    | Tenant.Reactions.ImpliesStateChange (clientId, eventCount) ->
        let! version', summary = service.QueryWithVersion(clientId, Contract.ofState)
        let wrapped = generate stream version' (Contract.Summary summary)
        let! _ = produceSummary wrapped
        return Propulsion.Sinks.StreamResult.OverrideNextIndex version', Outcome.Ok (1, eventCount - 1)
    | Tenant.Reactions.NoStateChange eventCount ->
        return Propulsion.Sinks.StreamResult.AllProcessed, Outcome.Skipped eventCount
    | Tenant.Reactions.NotApplicable eventCount ->
        return Propulsion.Sinks.StreamResult.AllProcessed, Outcome.NotApplicable eventCount }
```
The helpers can also make tests terser:
```fsharp
// ✅ BETTER - intention revealing names, classification encapsulated close to the events
module Tenant.Tests

let [<Fact>] ``generated correct events`` () =
    let id = TenantId.generate ()
    let streamName = Tenant.Reactions.streamName id
```
If your `Fold` logic is anything but incredibly boring, that's a design smell.
If you must, unit test it to satisfy yourself things can't go wrong, but logging is never the answer.
Fold logic should not be deciding anything - just summarizing facts.
If anything needs to be massaged prior to making a decision, do that explicitly; don't pollute the `Fold` logic.
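For reference, a sketch of a suitably boring `Fold` (event and state shapes invented for the example): it only summarizes facts, with no decisions, no validation and no logging:

```fsharp
module Events =
    type Event =
        | Created of {| name: string |}
        | Renamed of {| name: string |}

module Fold =
    type State = Initial | Running of name: string
    let initial = Initial
    // evolve records what happened; it never judges whether it *should* have happened
    let evolve _state = function
        | Events.Created e -> Running e.name
        | Events.Renamed e -> Running e.name
    let fold: State -> Events.Event seq -> State = Seq.fold evolve
```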
In general, you want to make illegal States unrepresentable.
See Events: AVOID including egregious identity information.
Railway Oriented Programming is a fantastic thinking tool. Designing with Types is an excellent implementation strategy. Domain Modelling Made Functional is a must-read book. But it's critical to also consider the other side of the coin, to avoid a lot of mess:

- Against Railway-Oriented Programming, by Scott Wlaschin. Scott absolutely understands the tradeoffs, but it's easy to forget them when reading the series.
- You're better off using Exceptions, by Eirik Tsarpalis.
Each Decision function should have as specific a result contract as possible. In order of preference:

- `unit`: a function that idempotently maps the intent or request to internal Events based solely on the State is the ideal. Telling the world about what you did is not better. Logging what it did is not better than being able to trust it to do its job. Unit tests should assert based on the produced Events as much as possible, rather than relying on a return value.
- `throw`: if something can go wrong, but it's not an anticipated first class part of the workflow, there's no point returning an `Error` result; you're better off using Exceptions.
- `bool`: in some cases, an external system may need to know whether something is permitted or necessary. If that's all that's needed, don't return identifiers or messages that give away extra information.
- simple discriminated union: the next step after a `true`/`false` is to make a simple discriminated union - you get a chance to name it, and the cases involved.
- record, anonymous record, tuple: returning multiple items is normally best accomplished via a named record type:
  - the caller gets to use a clear name per field
  - how it's encoded in the State type can vary over time without consumption sites needing to be revisited
  - extra fields can be added later, without each layer through which the response travels needing to be adjusted
  - the caller gets to pin the exact result type via a type annotation (either in the `Service`'s `member` return type, or at the call site) - this is not possible if it's an anonymous record. 💡 In some cases a tuple can be a better encoding, if it's important that each call site explicitly consume each part of the result.
- `string`: a string can be anything in any language. It can be `null`. It should not be used to convey a decision outcome.
- `Result`: a result can be a success or a failure; both sides are generic. It's the very definition of a lowest common denominator.
  - if it's required in a response transmission, map it out there; don't make the implementation logic messier and harder to test in order to facilitate that need.
  - if it's because you want to convey some extra information that the event cannot convey, use a tuple, a record or a Discriminated Union.
It's always sufficient to return a `bool` or an `enum` to convey an outcome (but try to avoid even that). See also: Fold: DONT log.

Combining successes and failures into one type because something will need to know suggests that there is a workflow; it's better to model that explicitly. If your API has a common set of result codes that it can return, map to those later - the job here is to model the decisions.
See: use the simplest result possible. A sketch of a decision-specific result type follows below.
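A hedged sketch of such a result (names invented; assumes a `Fold.State` with `members` and `spacesFree`, and an `Events.Joined` case): the outcome type is named for this one decision, and an idempotent retry yields the same outcome as the original request:

```fsharp
type JoinOutcome = Joined | WaitlistFull

module Decisions =
    let join userId (state: Fold.State) =
        if state.members |> Set.contains userId then JoinOutcome.Joined, [||] // idempotent retry: no new event, same outcome
        elif state.spacesFree > 0 then JoinOutcome.Joined, [| Events.Joined {| user = userId |} |]
        else JoinOutcome.WaitlistFull, [||]
```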
A corollary of designing for idempotency is that we don't want to have the caller care about whether a request triggered a change. If we need to test that, we can call the decision function and simply assert against the events it produced.
```fsharp
// ❌ DONT DO THIS!
module Decisions =

    let create name state =
        if state <> Initial then AlreadyCreated, [||]
        else Ok, [| Created { name = name } |]
```
The world does not need to know that you correctly handled at least once delivery of a request that was retried when the wifi reconnected.
Instead:
```fsharp
let create name = function
    | Fold.Initial -> [| Events.Created { name = name } |]
    | Fold.Running _ -> [||]

...

module ThingTests

let [<Fact>] ``create generates Created`` () =
    let state = Fold.Initial
    let events = Decisions.create "tim" state
    events =! [| Events.Created { name = "tim" } |]

let [<Fact>] ``create is idempotent`` () =
    let state = Fold.Running ()
    let events = Decisions.create "tim" state
    events =! [||]
```
If you have three outcomes for one decision, don't borrow that result type for a separate decision that only needs two. Just give it its own type. See: use the simplest result possible.
Most systems will have a significant number of Aggregates with low numbers of Events and Decisions. Having the Decision functions at the top level of the Aggregate Module can work well for those. Many people like to group such logic within a `module Decisions`, as it gives a good outline (`module Stream`, `module Events`, `module Reactions`, `module Fold`, `type Service`, `module Factory`) that allows one to quickly locate relevant artifacts and orient oneself in a less familiar area of the code. A key part of managing the complexity is to start looking for ways to group the decisions into clumps of 3-10 related decision functions in a `module` within the overall `module Decisions` (or at the top level of the file) as early as possible, as sketched below.
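For example, a sketch of such grouping (the domain names, `Events` cases and the `Fold.hasMember` helper are invented for the example):

```fsharp
module Decisions =

    module Lifecycle =

        let create name = function
            | Fold.Initial -> [| Events.Created {| name = name |} |]
            | _ -> [||]
        let close = function
            | Fold.Running _ -> [| Events.Closed |]
            | _ -> [||]

    module Membership =

        let add userId state = [| if not (Fold.hasMember userId state) then Events.MemberAdded {| user = userId |} |]
        let remove userId state = [| if Fold.hasMember userId state then Events.MemberRemoved {| user = userId |} |]
```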
The bulk of introductory material on the Decider pattern, and event sourcing in general, uses the Command pattern as if it's a central part of the architecture. That's not unreasonable; it's a proven pattern that's useful in a variety of contexts.

Some positives of the pattern are:

- one can route any number of commands through any number of layers without having to change anything to add a new command
- it can enable applying cross-cutting logic uniformly
- when implemented as Discriminated Unions in F#, the code can be very terse, and you can lean on total matching etc.
- in some cases it can work well with property based testing; the entirety of an Aggregate's Command Handling can be covered via Property Based Testing
However, it's also just a pattern. It has negatives; some:

- if you have a single command handler, the result type is forced to be a lowest common denominator
- the code can actually end up longer and harder to read, but still anaemic in terms of modelling the domain properly:
```fsharp
module Decisions =

    type Command = Increment | Decrement | Reset

    let decide command state =
        match command with
        | Increment -> if state = 10 then [||] else [| Events.Incremented |]
        | Decrement -> if state = 0 then [||] else [| Events.Decremented |]
        | Reset -> if state = 0 then [||] else [| Events.Reset |]

type Service(resolve: ...) =

    member _.Execute(id, c) =
        let decider = resolve id
        decider.Transact(Decisions.decide c)

type App(service: Service, otherService: ...) =

    member _.Execute(id, cmd) =
        if otherService.Handle(id, cmd) then
            service.Execute(id, cmd)

type Controller(app: App) =

    member _.Reset(id) = app.Execute(id, Aggregate.Command.Reset)
```
If you instead use methods with argument lists to convey the same information, there's more opportunity to let the intention be conveyed in the code.
```fsharp
module Decisions =

    let increment state = [| if state < 10 then Events.Incremented |]
    let reset state = [| if state <> 0 then Events.Reset |]

type Service(resolve: ...) =

    member _.Reset id =
        let decider = resolve id
        decider.Transact Decisions.reset

    member _.Increment(id, ?by) =
        let decider = resolve id
        decider.Transact Decisions.increment

type App(service: Service, otherService: ...) =

    member _.HandleFrob(id) =
        if otherService.AttemptFrob() then service.Increment(id)

    member _.Reset(id) = service.Reset(id)

type Controller(app: App) =

    member _.Frob(id) = app.HandleFrob id
    member _.Reset(id) = app.Reset id
```
The primary purpose of an Aggregate is to gather State and produce Events to facilitate the making and recording of Decisions. There is no Law Of Event Sourcing that says you must at all times use CQRS to split all reads out to some secondary derived read model.
In fact, in the context of Equinox, the `AccessStrategy.RollingState`, `LoadOption.AllowStale` and `LoadOption.AnyCachedState` features each encourage borrowing the Decision State to facilitate rendering that state to users of the system directly.
However, making pragmatic choices can also become unfettered hacking very quickly. As such, the following apply. Unless there is a single obvious boring rendition for a boring aggregate, you should have a type per Query. And, as with the guidance on not using Lowest Common Denominator representations for results, you want to avoid directly exposing the `State`.
The purpose of the Fold State is to facilitate making decisions correctly. It often has other concerns, such as:

- being able to store and reload from a snapshot
- being able to validate, in the context of tests, that inferences being made based on events are being made correctly

Having it also be a read model DTO is a bridge too far:
```fsharp
// ❌ DONT DO THIS!
member service.Read(tenantId) =
    let decider = resolve tenantId
    decider.Query(fun state -> state)
```
`LoadOption.AllowStale` is the preferred default strategy for all queries. This is for two reasons:

- if a cached version of the state fresher than the `maxAge` tolerance is available, you produce a result immediately and your store does less work
- even if a sufficiently fresh state is not available, all such reads are coalesced into a single store roundtrip. This means that the impact of read traffic on the workload hitting the store itself is limited to one read round trip per `maxAge` interval.
```fsharp
module Queries =

    let infoCachingPeriod = TimeSpan.FromSeconds 10.

    type NameInfo = { name: string; contact: ContactInfo }
    let renderName (state: Fold.State) = { name = state.originalName; contact = state.contactDetails }
    let renderPendingApprovals (state: Fold.State) = Fold.calculatePendingApprovals state

type Service(resolve: ...) =

    // NOTE: Query should remain private; expose each relevant projection as a `Read*` method
    member private service.Query(maxAge: TimeSpan, tenantId, render: Fold.State -> 'r): Async<'r> =
        let decider = resolve tenantId
        decider.Query(render, load = Equinox.LoadOption.AllowStale maxAge)

    member service.ReadCachedName(tenantId): Async<Queries.NameInfo> =
        service.Query(Queries.infoCachingPeriod, tenantId, Queries.renderName)
    member service.ReadPending(tenantId): Async<int> =
        service.Query(Queries.infoCachingPeriod, tenantId, Queries.renderPendingApprovals)
```
While the `ReadCached*` pattern above is preferred, as it protects the store from unconstrained read traffic, there are cases where it's deemed necessary to be able to Read Your Writes 'as much as possible' at all costs. (TL;DR quite often you should really be using the `ReadCached*` pattern.)

The first thing to note is that you need to be sure you're actually meeting that requirement. For instance, if you are using EventStoreDB, DynamoDB or MessageDB, you will want to use `Equinox.LoadOption.RequireLeader` for it to be meaningful (otherwise a read (yes, even one served from the same application instance) might be served from a replica that has yet to see the latest state). For CosmosDB in `Session` consistency mode, similar concerns apply.

It's also important to consider the fact that any read, no matter how consistent it is at the point of reading, is instantly stale data the moment it's been performed.
`AllowStale` mode is less prone to this issue, as store read round trips are limited to one per `maxAge` interval.
`QueryRaw` should stay `private` - you want to avoid having read logic spread across your application doing arbitrary reads that are not appropriately encapsulated within the Aggregate.
```fsharp
// NOTE: the QueryRaw helper absolutely needs to stay private. Expose queries only as specific `QueryCurrent*` methods
member private service.QueryRaw(tenantId, render) =
    let decider = resolve tenantId
    decider.Query(render, Equinox.LoadOption.RequireLeader)

member service.QueryCurrentState(tenantId) =
    service.QueryRaw(tenantId, Queries.renderState)
```
Ideally, use the full name. If you can't help it, use `module` aliases as outlined below instead. If you are opening it because you also need to touch the Fold State, don't do that either.

Exception: for the unit tests associated with a single Aggregate, `open Aggregate` may make sense - as long as it's exactly that one Aggregate.
If you have logic in another module that is coupled to an event contract, you want that to stick out:

- If the module is concerned with exactly one Aggregate, you can alias it via `module Events = Aggregate.Events`
- If the module is concerned with more than one Aggregate and there are fewer than 10 usages, prefix the consumption with `Aggregate.Events.`
- If the module is concerned with more than one Aggregate and there are many usages, or the name is long, alias it via `module AggEvents = AggregateWithLongName.Events`
Exception: in some cases, an `open Events` inside `module Fold` might be reasonable:
```fsharp
module Events =
    ...

module Fold =

    open Events

    let evolve state = function
        | Increment -> state + 1
        | Decrement -> state - 1
```
BUT, how much worse is it to have to read or type:
```fsharp
module Events =
    ...

module Fold =

    let evolve state = function
        | Events.Increment -> state + 1
        | Events.Decrement -> state - 1
```
Within `module Decisions`, it's normally best not to open it; i.e., whenever producing Events, simply prefix:
```fsharp
module Events =
    ...

module Decisions =

    module Counting =

        let increment state = [| if state < 10 then Events.Incremented |]
```
If you have external logic that is coupled to the State of an Aggregate and/or the related types, be explicit about that coupling; refer to `Aggregate.Fold.State` to make it clear. Or, use the `ReadCached*` or `QueryCurrent*` patterns, which by definition return a specific type that is not the full `State` (and is not in the `Fold` namespace/module).
All the templates herein attempt to adhere to a consistent structure for the composition root `module` (the one containing an Application's `main`), consisting of the following common elements:
Responsible for: loading secrets and custom configuration, supplying defaults when environment variables are not set.

Wiring up the retrieval of configuration values is the most environment-dependent aspect of an application's interaction with its environment and/or data storage mechanisms. This is particularly relevant where there is variance between local (development time), testing and production deployments. For this reason, the retrieval of values from configuration stores or key vaults is not managed directly within the `module Args` section.

The `Configuration` type is responsible for encapsulating all bindings to Configuration or Secret stores (Vaults), in order that this does not have to be complected with the argument parsing or defaulting in `module Args` (a sketch follows after this list):

- DO (sparingly) rely on inputs from the command line to drive the lookup process
- DONT log values (`module Args`'s `Arguments` wrappers should do that as applicable as part of the wireup process)
- DONT perform redundant work to load values if they've already been supplied via Environment Variables
Responsible for: mapping Environment Variables and the Command Line `argv` to an `Arguments` model.

`module Args` fulfils three roles (a sketch follows at the end of this section):

- uses Argu to map the inputs passed via `argv` to values per argument, providing good error and/or help messages in the case of invalid inputs
- is responsible for managing all defaulting of input values, including echoing them to the console, such that an operator can infer the arguments in force without having to go look up defaults in a source control repo
- exposes an object model that the `build` or `start` functions can use to succinctly wire up the dependencies without needing to touch `Argu`, `Configuration`, or any concrete Configuration or Secrets storage mechanisms

Guidelines:

- DO take values via Argu or Environment Variables
- DO log the values being applied, especially where defaulting is in play
- DONT log secrets
- DONT mix in any application or settings specific logic (no retrieval of values; don't make people read the boilerplate to see if this app has custom secrets retrieval)
- DONT invest time changing the layout; leaving it consistent makes it easier for others to scan
- DONT be tempted to merge blocks of variables into a coupled monster - the intention is to (to the maximum extent possible) group arguments into clusters of 5-7 related items
- DONT reorder types - it'll just make it harder if you ever want to remix and/or compare and contrast across a set of programs

NOTE: there's a medium term plan to submit a PR to Argu extending it to be able to fall back to environment variables where a value is not supplied, by means of declarative attributes on the Argument specification in the DU, including having the `--help` message automatically include a reference to the name of the environment variable through which one can supply the value.
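A hedged sketch of the shape of such a `module Args`, using Argu (the parameter names are illustrative, and the `Configuration` members build on the sketch above):

```fsharp
module Args =

    open Argu

    [<NoEquality; NoComparison>]
    type Parameters =
        | [<AltCommandLine "-V">] Verbose
        | [<AltCommandLine "-g">] ConsumerGroupName of string
        interface IArgParserTemplate with
            member p.Usage =
                match p with
                | Verbose -> "request verbose logging."
                | ConsumerGroupName _ -> "projector consumer group name."

    // wraps the raw parse results; owns defaulting and echoing of the effective values
    type Arguments(config: Configuration, p: ParseResults<Parameters>) =
        member _.Verbose = p.Contains Verbose
        member _.ConsumerGroupName = p.GetResult ConsumerGroupName
        member _.CosmosConnection = config.CosmosConnection

    let parse tryGetConfigValue argv: Arguments =
        let parser = ArgumentParser.Create<Parameters>(programName = "MyApp")
        Arguments(Configuration tryGetConfigValue, parser.ParseCommandLine argv)
```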
Responsible for: applying logging config and setting up loggers for the application.

- DO allow overriding of the log level via a command line argument and/or environment variable (by passing `Args.Arguments` or values from it)
```fsharp
type Logging() =

    [<Extension>]
    static member Configure(configuration: LoggingConfiguration, ?verbose) =
        configuration
            .Enrich.FromLogContext()
        |> fun c -> if verbose = Some true then c.MinimumLevel.Debug() else c
        // etc.
```
The `start` function contains the specific wireup relevant to the infrastructure requirements of the microservice - it's the sole aspect that is not expected to adhere to a standard layout as prescribed in this section.

```fsharp
let start (args: Args.Arguments) =
    ... // (yields a started application loop)
```
The `run` function formalizes the overall pattern. It is responsible for:

- managing the correct sequencing of the startup procedure, weaving together the above elements
- managing the emission of startup or abnormal termination messages to the console

Rules:

- DONT alter the canonical form - the processing is in this exact order for a multitude of reasons
- DONT have any application-specific wiring within `run` - any such logic should live within the `start` and/or `build` functions
- DONT return an `int` from `run`; let `main` define the exit codes in one place
```fsharp
let run args = async {
    use consumer = start args
    return! consumer.AwaitWithStopOnCancellation()
}

[<EntryPoint>]
let main argv =
    try let args = Args.parse EnvVar.tryGet argv
        try Log.Logger <- LoggerConfiguration().Configure(verbose = args.Verbose).CreateLogger()
            try run args |> Async.RunSynchronously; 0
            with e when not (e :? System.Threading.Tasks.TaskCanceledException) -> Log.Fatal(e, "Exiting"); 2
        finally Log.CloseAndFlush()
    with :? Argu.ArguParseException as e -> eprintfn "%s" e.Message; 1
        | e -> eprintfn "Exception %s" e.Message; 1
```
Please don't hesitate to create a GitHub issue for any questions, so others can benefit from the discussion. For any significant planned changes or additions, please err on the side of reaching out early, so we can align expectations - there's nothing more frustrating than having your hard work not yield a mutually agreeable result ;)

See the Equinox repo's CONTRIBUTING section for general guidelines regarding how contributions are considered, specifically wrt Equinox.

The following sorts of things are top of the list for the templates:

- Fixes for typos, additions of info to the readme, or comments in the emitted code etc.
- Small-scale cleanup or clarifications of the emitted code
- Support for additional languages in the templates
- Further straightforward starter projects

While there is no rigid or defined limit to what makes sense to add, it should be borne in mind that `dotnet new eqx/pro*` is sometimes going to be a new user's first interaction with Equinox and/or [asp]dotnetcore. Hence there's a delicate (and intrinsically subjective) balance to be struck between:

- simplicity of programming techniques used / beginner friendliness
- brevity of the generated code
- encouraging good design practices

In other words, there's lots of subtlety to what should and shouldn't go into a template - so discussing changes before investing time is encouraged; agreed changes will generally be rolled out across the repo.