DDD
It is our job, as software engineers, to effectively transfer real-world business problems and solutions into the digital world. This is not an easy task. You have to understand how the business operates and what its key advantages over the competitors are. Exploring models in collaboration with domain experts and software experts is essential. To be successful, you need to uncover and concentrate on how the business generates profit. We call this area the Core Domain.
There are different supporting pieces as part of the overall puzzle which enhance the Core Domain. We call those Supporting Domains. You need to pay special attention to the details, because sometimes a so-called "very important part of the business" could be something generic and commonly used in other applications, a.k.a. a Generic Domain. For such cases, it is recommended to use an off-the-shelf product or leverage an open-source project and concentrate on the Core and Supporting domains. Otherwise, the software engineers will build something which brings neither value nor a competitive advantage to the client.
With Domain-Driven Design the focus is on the core complexity of the domain!
Write software that models business processes explicitly!
Domain-Driven Design is a concept that helps to reduce the overall complexity of a domain. Software solutions built using the Domain-Driven Design approach tend to reflect reality more closely and enable the business to grow and expand faster in a reliable and predictable way.
With Domain-Driven Design you are delivering business value, not just ordinary software.
There are different opinions about when DDD is appropriate and when you should use it.
There are several factors that make Domain-Driven Design shine. If you have everything in place you should expect good, if not great, results:
you are building software for an established and already operating business
you have access to domain experts
you are passionate about what you are doing
It is recommended to use techniques other than Domain-Driven Design if:
you are building a software solution for a totally new business. The argument behind this is that Domain-Driven Design relies on the experience and the know-how of the domain experts. It is very likely that you will invest a great amount of time designing models which are not useful, which will almost certainly lead to failure.
you are building a concept/MVP solution for a new feature. It is better to do it the "quick-and-dirty" way so you can discover whether the business concept will be successful or not. This will give you a deeper understanding of the problems you are trying to solve.
Find a domain expert!
Strategic design
Tactical design
Fail fast
Anticipate the steep learning curve
Cronus is a lightweight framework for building event driven systems with DDD/CQRS/ES in mind
Cronus is an open-source framework that helps you solve business problems while spending less time on infrastructure concerns. Applications built with Cronus are centred around DDD, CQRS, and Event Sourcing.
While many applications can be built using Cronus, it has proven very effective for microservices architectures, providing a powerful way to evolve sensibly towards event-driven microservices.
Cronus has its roots way back in 2012 when a couple of passionate software engineers started building the infrastructure for a software project using DDD/CQRS/ES. The results were quite impressive and the team started growing. Everybody was sharing the same vision and passion for going beyond the boundaries of their skills. The amazing part of this story is that this hasn't changed. Cronus is a reflection of our philosophy for building great software solutions.
ES
Event Sourcing is a foundational concept in the Cronus framework, emphasizing the storage of state changes as a sequence of events. This approach ensures that every modification to an application's state is captured and stored, facilitating a comprehensive history of state transitions.
Immutable Events: Each event represents a discrete change in the system and is immutable, ensuring a reliable audit trail.
Event Storage: Events are persistently stored, allowing the system to reconstruct the current state by replaying these events.
State Reconstruction: The current state of an entity is derived by sequentially applying all relevant events, ensuring consistency and traceability.
Auditability: Maintains a complete history of all changes, facilitating debugging and compliance.
Scalability: Efficiently handles high-throughput systems by focusing on event storage and processing.
Flexibility: Supports complex business logic and workflows by modeling state changes as events.
Define Events: Create events that represent meaningful changes in the domain. In Cronus, events are immutable and should be named in the past tense to reflect actions that have occurred.
Persist Events: Store events in the event store, which serves as the single source of truth for the system's state.
Rehydrate State: Reconstruct the current state of aggregates by replaying the sequence of events associated with them.
By adhering to these principles, Cronus enables developers to build robust, event-driven systems that are both scalable and maintainable.
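To make these steps concrete, here is a minimal, framework-agnostic sketch in plain C# (it is not the Cronus API; the type names are made up for illustration): immutable events are stored, and the current state is rebuilt by replaying them.

```csharp
using System;
using System.Collections.Generic;

// Immutable events: each one records a discrete change that already happened.
public record TodoCreated(Guid Id, string Title);
public record TodoCompleted(Guid Id);

public class TodoState
{
    public Guid Id { get; private set; }
    public string Title { get; private set; } = string.Empty;
    public bool IsCompleted { get; private set; }

    // Applying an event mutates the derived state, never the stored events.
    public void Apply(object @event)
    {
        switch (@event)
        {
            case TodoCreated e: Id = e.Id; Title = e.Title; break;
            case TodoCompleted: IsCompleted = true; break;
        }
    }
}

public static class EventSourcingDemo
{
    // State reconstruction: replay the stored events in order.
    public static TodoState Rehydrate(IEnumerable<object> storedEvents)
    {
        var state = new TodoState();
        foreach (var @event in storedEvents)
            state.Apply(@event);
        return state;
    }
}
```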
To help you get started quickly with Cronus, we will build an application that will satisfy all of the following business requirements.
We need a new task management system.
We need data to be consistent.
We need to be able to reassign tasks inside the user group.
We need an accurate progress report for every user.
Groups progress report needs to be secured such that only group members can access it.
We need a notification to the group members when a user finishes his task.
We need a screen to view the historical changes in user activity.
When users close their accounts we need to ask them why (optional survey).
We need to generate a monthly report that indicates why lost users closed their accounts.
The Cronus framework supports multitenancy, enabling a single application instance to serve multiple tenants while ensuring data isolation and security for each. This design allows for efficient resource utilization and simplified maintenance across diverse client bases.
Tenant Isolation: Each tenant's data and configurations are isolated, preventing unauthorized access and ensuring privacy.
Dynamic Tenant Management: Cronus allows for the addition or removal of tenants at runtime, facilitating scalability and adaptability to changing business needs.
Shared Infrastructure: While tenants share the same application infrastructure, their data and processes remain segregated, optimizing resource usage without compromising security.
Tenant Identification: Assign a unique identifier to each tenant to distinguish their data and operations within the system.
Data Segregation: Utilize strategies such as separate databases, schemas, or tables with tenant-specific identifiers to ensure data isolation.
Configuration Management: Maintain tenant-specific configurations to cater to individual requirements and preferences.
Access Control: Implement robust authentication and authorization mechanisms to enforce tenant boundaries and prevent cross-tenant data access.
Consistent Tenant Context: Ensure that the tenant context is consistently applied throughout the application to maintain data integrity and security.
Scalability Planning: Design the system to handle varying numbers of tenants, considering factors like data volume, performance, and resource allocation.
Monitoring and Auditing: Implement monitoring and auditing tools to track tenant-specific activities, aiding in compliance and troubleshooting.
By adhering to these practices, developers can leverage Cronus's multitenancy capabilities to build scalable, secure, and efficient applications that serve multiple clients effectively.
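As a small sketch of how the tenant surfaces in application code: the TenantsOptions object (populated from the Cronus:Tenants setting described in the Configuration section) and the per-message CronusContext (described in the Workflows section) can be injected. The Elders.Cronus namespace and the Tenants/Tenant property names are assumptions here.

```csharp
using System.Linq;
using Elders.Cronus;                  // namespace assumed
using Microsoft.Extensions.Options;

public class TenantAwareService
{
    private readonly TenantsOptions tenants;   // bound from Cronus:Tenants
    private readonly CronusContext context;    // carries the tenant of the current message

    public TenantAwareService(IOptions<TenantsOptions> tenants, CronusContext context)
    {
        this.tenants = tenants.Value;
        this.context = context;
    }

    // Property names (Tenants, Tenant) are assumptions; check your Cronus version.
    public bool IsKnownTenant(string tenant) => tenants.Tenants.Contains(tenant);

    public string CurrentTenant => context.Tenant;
}
```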
First, we need to add a UserId and a TaskId so we have identifiers for these two entities.
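A sketch of the two ids. The AggregateRootId helper base class is described in the Aggregates section below; its exact constructor signature is an assumption, but the resulting ids are URNs of the form urn:&lt;tenant&gt;:&lt;aggregate&gt;:&lt;id&gt;, which you can see Base64-encoded later in this tutorial.

```csharp
using System;
using Elders.Cronus; // namespace assumed

// Sketch only: the base-class constructor arguments and their order are assumptions.
public class TaskId : AggregateRootId
{
    public TaskId(Guid id, string tenant) : base(id.ToString(), "task", tenant) { }
}

public class UserId : AggregateRootId
{
    public UserId(Guid id, string tenant) : base(id.ToString(), "user", tenant) { }
}
```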
Then we need to create a Cronus command for task creation and an event that will indicate that it has occurred.
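A sketch of both messages, following the conventions from the Commands, Events, and Serialization sections below: an imperative name for the command, past tense for the event, data contracts with unique Guid names (the Guids below are placeholders), and parameterless constructors for serialization.

```csharp
using System.Runtime.Serialization;
using Elders.Cronus; // namespace assumed

[DataContract(Name = "11111111-1111-1111-1111-111111111111")] // placeholder Guid
public class CreateTask : ICommand
{
    CreateTask() { } // required for serialization

    public CreateTask(TaskId id, UserId userId, string name)
    {
        Id = id;
        UserId = userId;
        Name = name;
    }

    [DataMember(Order = 1)] public TaskId Id { get; private set; }
    [DataMember(Order = 2)] public UserId UserId { get; private set; }
    [DataMember(Order = 3)] public string Name { get; private set; }

    public override string ToString() => $"Create task '{Name}' ({Id})";
}

[DataContract(Name = "22222222-2222-2222-2222-222222222222")] // placeholder Guid
public class TaskCreated : IEvent
{
    TaskCreated() { } // required for serialization

    public TaskCreated(TaskId id, UserId userId, string name)
    {
        Id = id;
        UserId = userId;
        Name = name;
    }

    [DataMember(Order = 1)] public TaskId Id { get; private set; }
    [DataMember(Order = 2)] public UserId UserId { get; private set; }
    [DataMember(Order = 3)] public string Name { get; private set; }

    public override string ToString() => $"Task '{Name}' was created ({Id})";
}
```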
The Apply method will pass the event to the state of the aggregate and change its state.
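A sketch of the aggregate root and its state, using the AggregateRoot&lt;TState&gt; and AggregateRootState&lt;,&gt; base classes described in the Aggregates section below (the state field and the Apply/When conventions are taken from this manual; minor base-class details may differ). The aggregate is named TaskItem to avoid clashing with System.Threading.Tasks.Task.

```csharp
using Elders.Cronus; // namespace assumed

public class TaskItem : AggregateRoot<TaskItemState>
{
    TaskItem() { } // required by the framework for rehydration

    public TaskItem(TaskId id, UserId userId, string name)
    {
        // Apply publishes the event; the state's When handler changes the data.
        Apply(new TaskCreated(id, userId, name));
    }
}

public class TaskItemState : AggregateRootState<TaskItem, TaskId>
{
    public override TaskId Id { get; set; }  // the override is an assumption about the base class
    public UserId UserId { get; set; }
    public string Name { get; set; }

    // Invoked when the event is applied and when the aggregate is replayed from the event store.
    public void When(TaskCreated e)
    {
        Id = e.Id;
        UserId = e.UserId;
        Name = e.Name;
    }
}
```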
We register a handler by inheriting from ICommandHandler<>.
When the command arrives, we read the state of the aggregate; if it is not found, we create a new one and call SaveAsync to save its state to the database.
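A sketch of such a handler. It follows the description above (read, create if missing, SaveAsync), but the repository method names, the shape of its result, and the handler method signature are assumptions; adjust them to your Cronus version.

```csharp
using System.Threading.Tasks;
using Elders.Cronus; // namespace assumed

public class TaskItemAppService : ICommandHandler<CreateTask>
{
    private readonly IAggregateRepository repository;

    public TaskItemAppService(IAggregateRepository repository) => this.repository = repository;

    public async Task HandleAsync(CreateTask command)
    {
        // Read the aggregate state; if it does not exist yet, create it and persist its events.
        var result = await repository.LoadAsync<TaskItem>(command.Id); // method/result shape assumed
        if (result.IsSuccess == false)
        {
            var task = new TaskItem(command.Id, command.UserId, command.Name);
            await repository.SaveAsync(task);
        }
    }
}
```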
Now we need a controller to publish our commands and create tasks.
Here we create a TaskId and a UserId and inject IPublisher&lt;CreateTask&gt; to publish the command. After this, the command will be sent to RabbitMQ and then handled in the application service.
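A sketch of such a controller (ASP.NET Core). The route, the request DTO, and the hard-coded tenant value are illustrative.

```csharp
using System;
using Elders.Cronus; // namespace assumed
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("tasks")]
public class TasksController : ControllerBase
{
    private readonly IPublisher<CreateTask> publisher;

    public TasksController(IPublisher<CreateTask> publisher) => this.publisher = publisher;

    public record CreateTaskRequest(Guid UserId, string Name);

    [HttpPost]
    public IActionResult Create(CreateTaskRequest request)
    {
        var taskId = new TaskId(Guid.NewGuid(), "tenant");   // tenant value is illustrative
        var userId = new UserId(request.UserId, "tenant");

        // Publish sends the command to RabbitMQ, where the application service handles it.
        bool published = publisher.Publish(new CreateTask(taskId, userId, request.Name));

        return published ? Ok(new { id = taskId.ToString() }) : StatusCode(500);
    }
}
```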
Let's take an Id from the response and encode it to Base64.
Then try: select * from taskmanagerevents where id = 'dXJuOnRlbmFudDp0YXNrOmU1MjA1NTA3LWYyNmUtNGExMy05OTU4LTNjMzVlYzAwY2I1Yw=='
Imaginary example:
Imagine that you have to build an online store. Until now, the business has been operating locally in a big city and the business has been very successful. The idea is to make it possible for other people outside of the big city to have the same experience which will allow the business to expand and reach a wider customer audience. There are a few questions you have to ask the business or discover somehow from the domain experts.
Q: What are the key advantages over the direct competition?
A: We offer unique loyalty programs which enable good discounts to customers. In addition, we have a rich network of suppliers that gives a wide variety of goods to choose from.
Q: How is the online store going to generate profit?
A: Unlocking the loyalty program requires a paid monthly subscription.
Aggregates represent the business models explicitly. They are designed to fully match any needed requirements. Any change done to an instance of an aggregate goes through the aggregate root.
Creating an aggregate root with Cronus is as simple as writing a class that inherits AggregateRoot<TState>
and a class for the state of the aggregate root. To publish an event from an aggregate root use the Apply(IEvent @event)
method provided by the base class.
The aggregate root state keeps the current data of the aggregate root and is responsible for changing it based on events raised only by the root.
Use the abstract helper class AggregateRootState<TAggregateRoot, TAggregateRootId>
to create an aggregate root state. It can be accessed in the aggregate root using the state
field provided by the base class. Also, you can implement the IAggregateRootState
interface by yourself in case inheritance is not a viable option.
To change the state of an aggregate root, create event-handler methods for each event with a method signature public void When(Event e) { ... }
.
Another option is to use the AggregateRootId<T>
class. This will give you more flexibility in constructing instances of the id. Also, parsing URNs will return the specified type T
instead of AggregateUrn
.
Add a state class that inherits AggregateRootState with the appropriate generic parameters.
Finally, we can create an application service for command handling.
Now let's start our Service and API. We should be able to make POST requests to our controller through Swagger and create our first task in the system. It must be persisted in the event store.
Download any UI tool for Cassandra.
You can read more about the state pattern.
All aggregate root ids must implement the IAggregateRootId
interface. Since Cronus uses URNs for ids, that will require implementing the URN specification as well. If you don't want to do that, you can use the provided helper base class AggregateRootId
.
TODO: describe all different types of ids Cronus provides, their purpose and hierarchy. Explain how and why to define custom ids (simple and composite) for aggregates, entities and projections. Explain URNs and the different parsing methods.
An entity is an object that has an identity and is mutable. Each entity is uniquely identified by an ID rather than by its properties; therefore, two entities can be considered equal if both of them have the same ID even though they have different properties.
You can define an entity with Cronus using the Entity<TAggregateRoot, TEntityState>
base class. To publish an event from an entity, use the Apply(IEvent @event)
method provided by the base class.
Set the initial state of the entity using the constructor. The event responsible for creating the entity is being published by the root/parent to modify its state. That means that you can not (and should not) subscribe to that event in the entity state using When(Event e)
.
The entity state keeps current data of the entity and is responsible for changing it based on events raised only by the same entity.
Use the abstract helper class EntityState<TEntityId>
to create an entity state. It can be accessed in the entity using the state
field provided by the base class. Also, you can implement the IEntityState
interface by yourself in case inheritance is not a viable option.
To change the state of an entity, create event-handler methods for each event with a method signature public void When(Event e) { ... }
.
All entity ids must implement the IEntityId
interface. Since Cronus uses URNs for ids that will require implementing the URN specification as well. If you don't want to do that, you can use the provided helper base class EntityId<TAggregateRootId>
.
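A sketch of an entity that lives inside the tutorial's TaskItem aggregate, built on the Entity&lt;,&gt; and EntityState&lt;&gt; base classes described above. The base-class constructor, the ChecklistItemId (an EntityId&lt;TaskId&gt;) and the ChecklistItemRenamed event are assumptions, defined analogously to the other ids and events in this manual.

```csharp
using Elders.Cronus; // namespace assumed

public class ChecklistItem : Entity<TaskItem, ChecklistItemState>
{
    public ChecklistItem(TaskItem root, ChecklistItemId id, string text) : base(root) // ctor assumed
    {
        // Set the initial state in the constructor; the creating event is applied by the root.
        state.Id = id;
        state.Text = text;
    }

    public void Rename(string text)
    {
        // Publish an event from the entity; the state's When handler changes the data.
        Apply(new ChecklistItemRenamed(state.Id, text));
    }
}

public class ChecklistItemState : EntityState<ChecklistItemId>
{
    public override ChecklistItemId Id { get; set; } // the override is an assumption
    public string Text { get; set; }

    public void When(ChecklistItemRenamed e) => Text = e.Text;
}
```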
Data migration is the process of moving data from one system to another, and there are many reasons why a system may require such a move. To name the most common ones:
Natural system evolution which requires the data to be optimized for performance or maintainability.
Legal issues where some parts of the data have to be deleted or encrypted
Bad data created by a bug in the system
Business reasons, e.g. when businesses merge or split.
It is important that the business value of the data is not changed during the process.
There are many different strategies for when and how to do data migration. You must plan and execute carefully because the damage could be significant.
Depending on the data volume, the migration process could take hours, even days. During that time there are many things which could fail and corrupt the data in an irreversible way. To avoid such scenarios you should always migrate the data into a new storage repository.
Always migrate the data into a new storage repository.
Make sure the migration process does not overwhelm the live system. You should be in control of when the data is being migrated so you can pause the migration during peak times of the live system. To achieve this, use a separate process to run the data migration. Always keep in mind that migrating data consumes system resources and you must account for that.
Use a separate process to run data migration.
When you are migrating a live system:
Create a separate process which migrates the existing data into the new data repository
The live system must push any new data to the migration service. This can easily be achieved by sending it to a message broker.
Value objects represent immutable and atomic data. They are distinguishable only by the state of their properties and do not have an identity or any identity tracking mechanism. Two value objects with the exact same properties can be considered equal. You can read more about value objects in this article.
To define a value object with Cronus, create a class that inherits the base helper class ValueObject<T>
. Keep all business rules and data related to the value object within the class.
The base class ValueObject<T>
implements the IEqualityComparer<T>
and IEquatable<T>
interfaces. When comparing two value objects of the same type the properties from the first are being compared with the properties of the second using reflection. The base class also overrides the ==
and !=
operators.
If a value object contains a collection of items, make sure that the items are also value objects and the collection supports item-by-item comparison. Otherwise, you will have to override the default comparison algorithm.
Keep a parameterless constructor and specify a data contract for serialization.
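A sketch of a value object (the type and its properties are illustrative; the Guid in the data contract is a placeholder). Equality comes from the ValueObject&lt;T&gt; base class, so no extra code is needed for comparisons.

```csharp
using System.Runtime.Serialization;
using Elders.Cronus; // namespace assumed

[DataContract(Name = "33333333-3333-3333-3333-333333333333")] // placeholder Guid
public class Money : ValueObject<Money>
{
    Money() { } // parameterless constructor for serialization

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    [DataMember(Order = 1)] public decimal Amount { get; private set; }
    [DataMember(Order = 2)] public string Currency { get; private set; }
}
```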
Projections are queryable models used for the reading part of our application. We can design projections so that we control what data we store and how it is searched. Events are the basis for projection data.
To use projections, we need to update the configuration file for both the API and the Service.
And add some dependencies.
You can choose which implementation to use. You can get the tasks with the same name (commented out in the controller), or all tasks.
Every time the event occurs, it will be handled and persisted in the projection state.
Inject IProjectionReader, which is responsible for getting the projection state by the id on which the projection subscribed earlier: Subscribe&lt;TaskCreated&gt;(x =&gt; x.UserId).
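A sketch of such a projection for the tutorial. The ProjectionDefinition&lt;,&gt; base class, the Subscribe call, and the IEventHandler&lt;&gt; interface are described in the Projections section later in this manual; whether the base class exposes the state as State or state, and the exact handler method name, are assumptions.

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using Elders.Cronus;              // namespace assumed
using Elders.Cronus.Projections;  // namespace assumed

public class UserTasksProjection : ProjectionDefinition<UserTasksProjectionState, UserId>,
                                   IEventHandler<TaskCreated>
{
    public UserTasksProjection()
    {
        // One projection instance per user: the projection id is built from the event.
        Subscribe<TaskCreated>(x => x.UserId);
    }

    public void Handle(TaskCreated @event)
    {
        // Projections must be idempotent; a HashSet keeps redelivered events from duplicating data.
        State.TaskNames.Add(@event.Name);
    }
}

[DataContract(Name = "44444444-4444-4444-4444-444444444444")] // placeholder Guid
public class UserTasksProjectionState
{
    public UserTasksProjectionState()
    {
        TaskNames = new HashSet<string>(); // initialize collections in the constructor
    }

    [DataMember(Order = 1)] public HashSet<string> TaskNames { get; set; }
}
```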
(The dashboard is not required.)
If we hit this controller immediately after the first start, we may get a read error. We need to give the system some time to initialize the new projection store and build new versions of the projections. For an empty event store this takes less than a few seconds, but instead of waiting we will verify manually that everything is working properly.
Cronus Dashboard is a UI management tool for the Cronus framework. It is hosted inside our application, so add this missing code to our background service.
Start our Cronus Service and API.
In the dashboard select the Connections
tab and click New Connection
.
Set the predefined port for the Cronus endpoint: http://localhost:7477 and specify your connection name. Click Check
and then Add Connection
.
After you add a connection select it from the drop-down menu and navigate to the Projections tab.
You would be able to see all projections in the system.
Now we should be able to call the controller with a userId. The GetAsync method of IProjectionReader will load all events related to the projection and apply them to its state.
CQRS
A command is a simple immutable object that is sent to the domain to trigger a state change. There should be a single command handler for each command. It is recommended to use imperative verbs when naming commands together with the name of the aggregate they operate on.
It is possible for a command to get rejected if the data it holds is incorrect or inconsistent with the current state of the aggregate.
You can/should/must...
a command must be immutable
a command should clearly state a business intent with a name in the imperative form
a command can be rejected due to domain validation, error or other reason
a command must update only one aggregate
You can define a command with Cronus using the ICommand
markup interface. All commands get serialized and deserialized, that's why you need to keep the parameterless constructor and specify data contracts.
Cronus uses the ToString()
method for logging, so you can override it to generate user-readable logs. Otherwise, the name of the command class will be used for log messages.
To publish a command, inject an instance of IPublisher<ICommand>
into your code and invoke the Publish()
method passing the command. This method will return true
if the command has been published successfully through the configured transport. You can also use one of the overrides of the Publish()
method to delay or schedule a command.
This is a handler where commands are received and delivered to the addressed aggregate. Such a handler is called an application service. This is the "write" side in CQRS.
An application service is a command handler for a specific aggregate. One aggregate has one application service whose purpose is to orchestrate how commands are fulfilled. It is the application service's responsibility to invoke the appropriate aggregate methods and pass the command's payload. It mediates between the domain and the infrastructure and shields the domain model from the "outside". Only the application service interacts with the domain model.
You can create an application service with Cronus by using the AggregateRootApplicationService
base class. Specifying which commands the application service can handle is done using the ICommandHandler<T>
interface.
AggregateRootApplicationService
provides a property of type IAggregateRepository
that you can use to load and save the aggregate state. There is also a helper method Update(IAggregateRootId id, Action update)
that loads an aggregate based on the provided id, invokes the action, and saves the new state if there are any changes.
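A sketch of an application service built on this base class, reusing the tutorial's TaskItem aggregate. The CompleteTask command and the TaskItem.Complete() method are hypothetical, and the generic parameter and exact Update overload are assumptions.

```csharp
using Elders.Cronus; // namespace assumed

public class TaskItemApplicationService : AggregateRootApplicationService<TaskItem>, // generic arg assumed
                                          ICommandHandler<CompleteTask>              // hypothetical command
{
    public void Handle(CompleteTask command)
    {
        // Loads the aggregate by id, invokes the action, and saves new events if there are any.
        Update(command.Id, task => task.Complete()); // TaskItem.Complete() is hypothetical
    }
}
```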
You can/should/must...
an application service can load an aggregate root from the event store
an application service can save new aggregate root events to the event store
an application service can establish calls to the read model (not a common practice but sometimes needed)
an application service can establish calls to external services
you can do dependency orchestration
an application service must be stateless
an application service must update only one aggregate root. Yes, you can create one aggregate and update another one but think twice before doing so.
You should not...
an application service should not update more than one aggregate root in a single command/handler
you should not place domain logic inside an application service
you should not use an application service to send emails, push notifications etc. Use a port or a gateway instead
an application service should not update the read model
A projection is a representation of an object using a different perspective. In the context of CQRS, projections are queryable models on the "read" side that never manipulate the original data (events in event-sourced systems) in any way. Projections should be designed in a way that is useful and convenient for the reader (API, UI, etc.).
Cronus supports non-event-sourced and event-sourced projections with snapshots.
To create a projection, create a class for it that inherits ProjectionDefinition<TState, TId>
. The id can be any type that implements the IBlobId
interface. All ids provided by Cronus implement this interface but it is common to create your own for specific business cases. The ProjectionDefinition<TState, TId>
base class provides a Subscribe()
method that is used to create a projection id from an event. This will define an event-sourced projection with a state that will be used to persist snapshots.
Use the IEventHandler<TEvent>
interface to indicate that the projection can handle events of the specified event type. Implement this interface for each event type your projection needs to handle.
Create a class for the projection state. The state of the projection gets serialized and deserialized when persisting or restoring a snapshot. That's why it must have a parameterless constructor, a data contract and data members.
There is no guarantee the events will be handled in the order of publishing nor that every event will be handled at most once. That's why you should design projections in a way that solves those problems. Always assign all possible properties from the handled event to the state and make sure the projection is idempotent.
If the projection state contains a collection, make sure it doesn't get populated with duplicates. This can be achieved by using a HashSet<T>
and ValueObject
.
You can define a non-event-sourced projection by decorating it with the IProjection
interface. This is useful when you want to persist the state in an external system (e.g. ElasticSearch, relational database).
By default, all projections' states are being persisted as snapshots. If you want to disable this feature for a specific projection, use the IAmNotSnapshotable
interface.
To query a projection, you need to inject an instance of IProjectionReader
in your code and invoke the Get()
or GetAsync()
method. The returned object will be of type ReadResult
or Task<ReadResult>
containing the projection and a few properties indicating if the loading was successful.
Use separate models for the API responses from the projection states to ensure you won't introduce breaking changes if the projection gets modified.
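A sketch of querying the tutorial's projection from an API controller. The UserId.Parse helper, the exact shape of ReadResult, and the response model are assumptions/illustrative.

```csharp
using System.Threading.Tasks;
using Elders.Cronus;              // namespace assumed
using Elders.Cronus.Projections;  // namespace assumed
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("users")]
public class UserTasksController : ControllerBase
{
    private readonly IProjectionReader projections;

    public UserTasksController(IProjectionReader projections) => this.projections = projections;

    [HttpGet("{userId}/tasks")]
    public async Task<IActionResult> GetTasks(string userId)
    {
        var id = UserId.Parse(userId); // hypothetical URN parse helper on the id type

        var result = await projections.GetAsync<UserTasksProjection>(id);
        if (result.IsSuccess == false)
            return NotFound();

        // Map the projection state to a separate API response model.
        return Ok(new { tasks = result.Data.State.TaskNames }); // how the state is exposed may differ
    }
}
```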
TODO
You can/should/must...
a projection must be idempotent
a projection must not issue new commands or events
You should not...
a projection should not query other projections. All the data of a projection must be collected from the Events' data
An event is something significant that has happened in the domain. It encapsulates all relevant data of the action that happened.
You can/should/must...
an event must be immutable
an event must represent a domain event that already happened with a name in the past tense
an event can be dispatched only by one aggregate
To create an event with Cronus, just use the IEvent
markup interface.
Cronus uses the ToString()
method for logging, so you can override it to generate user-readable logs. Otherwise, the name of the event class will be used for log messages.
Sometimes called a Process Manager
In the Cronus framework, Sagas—also known as Process Managers—are designed to handle complex workflows that span multiple aggregates. They provide a centralized mechanism to coordinate and manage long-running business processes, ensuring consistency and reliability across the system.
Event-Driven Coordination: Sagas listen for domain events, which represent business changes that have already occurred, and react accordingly to drive the process forward.
State Management: Unlike simple event handlers, Sagas maintain state to track the progress of the workflow, enabling them to handle complex scenarios and ensure that all steps are completed successfully.
Command Dispatching: Sagas can send new commands to aggregates or other components, orchestrating the necessary actions to achieve the desired business outcome.
Sagas are particularly useful when dealing with processes that:
Involve multiple aggregates or bounded contexts.
Require coordination of several steps or actions.
Need to handle compensating actions in case of failures to maintain consistency.
By encapsulating the workflow logic within a Saga, developers can manage complex business processes more effectively, ensuring that all parts of the system work together harmoniously.
A Saga can send new commands to drive the process forward.
Ensure that Sagas are idempotent to handle potential duplicate events gracefully.
Maintain clear boundaries for each Saga to prevent unintended side effects.
Saga example
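A heavily hedged sketch of the idea, wired with interfaces named elsewhere in this manual (IEventHandler&lt;&gt;, IPublisher&lt;&gt;). The ISaga marker interface, the handler method name, and the RemindUser command are assumptions, not the verbatim Cronus API.

```csharp
using Elders.Cronus; // namespace assumed

// A saga/process manager: reacts to events, keeps track of the process, and dispatches commands.
public class TaskReminderSaga : ISaga,                       // marker interface assumed
                                IEventHandler<TaskCreated>
{
    private readonly IPublisher<RemindUser> publisher;       // RemindUser is a hypothetical command

    public TaskReminderSaga(IPublisher<RemindUser> publisher) => this.publisher = publisher;

    public void Handle(TaskCreated @event)
    {
        // Drive the process forward by sending a command; a real saga would also persist
        // its own state and handle timeouts/compensation.
        publisher.Publish(new RemindUser(@event.UserId, @event.Id));
    }
}
```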
In the Cronus framework, Ports facilitate communication between aggregates, enabling one aggregate to react to events triggered by another. This design promotes a decoupled architecture, allowing aggregates to interact through well-defined events without direct dependencies.
Event-Driven Communication: Ports listen for domain events—representing business changes that have already occurred—and dispatch corresponding commands to other aggregates that need to respond.
Statelessness: Ports do not maintain any persistent state. Their sole responsibility is to handle the routing of events to appropriate command handlers.
Ports are ideal for straightforward interactions where an event from one aggregate necessitates a direct response from another. However, for more complex workflows involving multiple steps or requiring state persistence, implementing a Saga is recommended. Sagas provide a transparent view of the business process and manage the state across various interactions, ensuring consistency and reliability.
By utilizing Ports appropriately, developers can design systems that are both modular and maintainable, adhering to the principles of Domain-Driven Design and Event Sourcing.
Port example
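A hedged sketch of a port in the same style: it listens for an event from one aggregate and dispatches a command to another, holding no state of its own. The IPort marker interface and the AssignReviewer command are assumptions.

```csharp
using Elders.Cronus; // namespace assumed

public class TaskReviewPort : IPort,                          // marker interface assumed
                              IEventHandler<TaskCreated>
{
    private readonly IPublisher<AssignReviewer> publisher;    // AssignReviewer is a hypothetical command

    public TaskReviewPort(IPublisher<AssignReviewer> publisher) => this.publisher = publisher;

    public void Handle(TaskCreated @event)
    {
        // Stateless routing: translate the event into a command for another aggregate.
        publisher.Publish(new AssignReviewer(@event.Id, @event.UserId));
    }
}
```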
Compared to a Port, which can dispatch a command, a Gateway can do the same, but it also has a persistent state. A typical scenario is sending commands to an external bounded context, such as push notifications, emails, etc. There is no need to event source this state, and it is perfectly fine if this state gets wiped. Example: an iOS push notification badge counter. This state should be used only for infrastructure needs and never for business cases. Compared to a Projection, which tracks events, projects their data, and is not allowed to send any commands at all, a Gateway can store and track metadata required by external systems. Furthermore, Gateways are restricted and are not touched when events are replayed.
You can/should/must...
a gateway can send new commands
An issue that came up in the past was that we serialized a huge amount of information in an event. The event contained a structure that in itself had a very innocent-looking property called TimeZoneInfo
:
After releasing the software, we noticed that the project was taking up an unusually large amount of space. After checking out a couple of persisted events, we found out that each time we used the struct Cycle
, we persisted some 6,200 lines of serialized JSON, about 6,000 of which were attributed to the TimeZoneInfo. This severely impacted event serialization and deserialization. The issue came up after an assignment that stored a full TimeZoneInfo instance in the Cycle's TimeZoneInfo property.
We decided that in order to lower the amount of data, we needed to migrate the event store while keeping up a live version of the old one, to avoid downtime.
In order to avoid having downtime, we decided to create a single deployable service (let's call it Migrator) that subscribed to the same events as the original application service. However, the Migrator would write the events directly to the new event store. Furthermore, once it boots, the Migrator would be responsible for copying the data over from the old event store while applying the needed changes. In our case, we needed to modify all events that had the Cycle
in them, and replace the TimeZoneInfo
with just a TimeZoneId
which is a simple string.
We changed the structure of the Cycle to this:
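A sketch of the reworked structure: only the time zone id is persisted as a plain string, and the full TimeZoneInfo is rebuilt on demand. The other members of Cycle are illustrative, and the Guid in the data contract is a placeholder.

```csharp
using System;
using System.Runtime.Serialization;

[DataContract(Name = "55555555-5555-5555-5555-555555555555")] // placeholder Guid
public struct Cycle
{
    public Cycle(DateTimeOffset startsAt, DateTimeOffset endsAt, string timeZoneId)
    {
        StartsAt = startsAt;       // illustrative member
        EndsAt = endsAt;           // illustrative member
        TimeZoneId = timeZoneId;   // a simple string instead of the whole TimeZoneInfo graph
    }

    [DataMember(Order = 1)] public DateTimeOffset StartsAt { get; private set; }
    [DataMember(Order = 2)] public DateTimeOffset EndsAt { get; private set; }
    [DataMember(Order = 3)] public string TimeZoneId { get; private set; }

    // Rebuild the full TimeZoneInfo only when it is actually needed.
    public TimeZoneInfo GetTimeZone() => TimeZoneInfo.FindSystemTimeZoneById(TimeZoneId);
}
```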
Event
Domain events represent business changes that have already happened.
https://github.com/Elders/Cronus/issues/269
The ISerializer interface is simple. You can plug in your own implementation, but you should not change it once you are in production.
The samples in this manual work with JSON and Proteus-protobuf serializers. Every ICommand
, IEvent
, ValueObject
or anything which is persisted is marked with a DataContractAttribute
and the properties are marked with a DataMemberAttribute
. Here is a quick sample of how this works (just ignore the WCF references or replace them with Cronus while reading). We use a Guid
for the name of the DataContract because it is unique.
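A sketch of such a contract (generate a fresh, unique Guid per contract; the class here is illustrative):

```csharp
using System.Runtime.Serialization;

[DataContract(Name = "66666666-6666-6666-6666-666666666666")] // placeholder Guid, must stay stable once deployed
public class TaskNameChanged
{
    TaskNameChanged() { } // private parameterless constructor required by the serializer

    public TaskNameChanged(string name) => Name = name;

    // Never change the Order once deployed to production; the visibility modifier may change.
    [DataMember(Order = 1)] public string Name { get; private set; }
}
```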
You can/should/must...
you must add private parameterless constructor
you must initialize all collections in the constructor(s)
you can rename any class whenever you like even when you are already in production
you can rename any property whenever you like even when you are already in production
you can add new properties
You should not...
you must not delete a class when already deployed to production
you must not remove/change the Name
of the DataContractAttribute
when already deployed to production
you must not remove/change the Order
of the DataMemberAttribute
when deployed to production. You can change the visibility modifier from public
to private
https://github.com/Elders/Cronus/issues/266
Workflows are the center of message processing. They are very similar to the ASP.NET middleware pipeline.
With a workflow you can:
define what logic will be executed when a message arrives
execute an action before or after the actual execution
override or stop a workflow pipeline
By default, all messages are handled in an isolated fashion via ScopedMessageWorkflow
using scopes. Once the scope is created then the next workflow (MessageHandleWorkflow
) is invoked with the current message and scope. In addition, DiagnosticsWorkflow
wraps the entire pipeline bringing insights into the performance of the message handling pipeline.
The primary focus of the workflow is to prepare an isolated scope and context within which a message is being processed. Usually, you should not interact with this workflow directly.
The workflow creates an instance of IServiceScope
which allows using Dependency Injection in a way familiar to any dotnet developer. In addition, the workflow initializes an instance of CronusContext
which holds information about the current tenant handling the message.
Additionally, Cronus uses structured logging, and a new log scope is created every time a new message arrives so you can correlate log messages.
Read more about the Dependency Injection and service lifetimes if this is a new concept for you.
TODO: Explain message handling workflow responsibilities
By default, Cronus and its sub-components have good default settings. However, not everything can be auto-configured, such as connection strings to databases or endpoints to various services.
Cronus:BoundedContext
>> string | Required: Yes
Cronus:Tenants
>> string[] | Required: Yes
Consumer toggles for Application Services, Projections, Ports, Sagas and Gateways (described below)
>> bool | Required: No | Default: true
Cronus uses this setting to personalize your application. This setting is used for naming the following components:
RabbitMQ exchange and queue names
Cassandra EventStore names
Cassandra Projection store names
Allowed Characters: Cronus:BoundedContext
must be an alphanumeric character or underscore only: ^\b([\w\d_]+$)
List of tenants allowed to use the system. Cronus is designed with multitenancy in mind from the beginning and requires at least one tenant to be configured in order to work properly. The multitenancy aspects are applied to many components and to give you a feel about this here is an incomplete list of different parts of the system using this setting:
Message - every message which is sent through Cronus is bound to a specific tenant
RabbitMQ exchanges and queues are tenant-aware
Event Store - every tenant has a separate storage
Projection Store - every tenant has a separate storage
Each value you provide in the array is converted to lowercase before use.
Allowed Characters: Cronus:Tenants
must be an alphanumeric character or underscore only: ^\b([\w\d_]+$)
Example value: ["tenant1","tenant2","tenant3"]
Once set, you can use the TenantsOptions
object via Dependency Injection for other purposes.
Specifies whether to start a consumer for the Application Services
Specifies whether to start a consumer for the Projections
Specifies whether to start a consumer for the Ports
Specifies whether to start a consumer for the Sagas
Specifies whether to start a consumer for the Gateways
Kestrel hosting configuration (see below)
>> configurationSection | Required: No
JWT bearer authentication configuration (see below)
>> configurationSection | Required: No
The API is hosted with Kestrel.
By default, the API is hosted on port 7477
.
A configuration could be provided by KestrelOptions. You can supply them directly in the DI or through a configuration file.
The API could be protected using a JWT bearer authentication.
The configuration is provided by JwtBearerOptions. You can supply them directly in the DI or through a configuration file.
Remarks: https://stackoverflow.com/a/58736850/224667
Cronus:Persistence:Cassandra:ConnectionString
>> string | Required: Yes
The connection to the Cassandra database server
Cronus:Persistence:Cassandra:ReplicationStrategy
>> string | Required: No | Default: simple
Configures Cassandra replication strategy. This setting has effect only in the first run when creating the database.
Valid values:
simple
network_topology - when using this setting you need to specify Cronus:Persistence:Cassandra:ReplicationFactor and Cronus:Persistence:Cassandra:Datacenters as well
Cronus:Persistence:Cassandra:ReplicationFactor
>> int | Required: No | Default: 1
Cronus:Persistence:Cassandra:Datacenters
>> string[] | Required: No
Cronus:Projections:Cassandra:ConnectionString
>> string | Required: Yes
The connection to the Cassandra database server
Cronus:Projections:Cassandra:ReplicationStrategy
>> string | Required: No | Default: simple
Configures Cassandra replication strategy. This setting has effect only in the first run when creating the database.
Valid values:
simple
network_topology - when using this setting you need to specify Cronus:Projections:Cassandra:ReplicationFactor and Cronus:Projections:Cassandra:Datacenters as well
Cronus:Projections:Cassandra:ReplicationFactor
>> int | Required: No | Default: 1
Cronus:Projections:Cassandra:Datacenters
>> string[] | Required: No
Cronus:Projections:Cassandra:TableRetention:DeleteOldProjectionTables
>> boolean | Required: No | Default: true
Enables deletion of old projection tables
Cronus:Projections:Cassandra:TableRetention:NumberOfOldProjectionTablesToRetain
>> uint | Required: No | Default: 2
Configures Cassandra number of old projection tables to retain -> default: live table and 2 old tables
Cronus:Transport:RabbiMQ:ConsumerWorkersCount
>> integer | Required: Yes | Default: 5
Configures the number of threads which will be dedicated to consuming messages from RabbitMQ for every consumer.
Cronus:Transport:RabbiMQ:Server
>> string | Required: Yes | Default: 127.0.0.1
DNS or IP to the RabbitMQ server
Cronus:Transport:RabbiMQ:Port
>> integer | Required: Yes | Default: 5672
The port number on which the RabbitMQ server is running
Cronus:Transport:RabbiMQ:VHost
>> string | Required: Yes | Default: /
The name of the virtual host. It is a good practice to not use the default / vhost. For more details see the official docs. Cronus is not using this for managing multitenancy.
Cronus:Transport:RabbiMQ:Username
>> string | Required: Yes | Default: guest
The RabbitMQ username
Cronus:Transport:RabbiMQ:Password
>> string | Required: Yes | Default: guest
The RabbitMQ password
Cronus:Transport:RabbiMQ:AdminPort
>> integer | Required: Yes | Default: 5672
RabbitMQ admin port used to create and delete RabbitMQ resources
An implementation of Cronus.AtomicAction using distributed locks with Redis
(Source: https://redis.io/topics/distlock)
Cronus:AtomicAction:Redis:ConnectionString
>> string | Required: Yes
Configures the connection string where Redis is located
Cronus:AtomicAction:Redis:LockTtl
>> TimeSpan | Required: No | Default: 00:00:01.000
Cronus:AtomicAction:Redis:ShorTtl
>> TimeSpan | Required: No | Default: 00:00:01.000
Cronus:AtomicAction:Redis:LongTtl
>> TimeSpan | Required: No | Default: 00:00:05.000
Cronus:AtomicAction:Redis:LockRetryCount
>> int | Required: No | Default: 3
Cronus:AtomicAction:Redis:LockRetryDelay
>> TimeSpan | Required: No | Default: 00:00:00.100
Cronus:AtomicAction:Redis:ClockDriveFactor
>> double | Required: No | Default: 0.01
Prerequisite software: Docker
Create a new console application project in a new folder using the dotnet CLI.
Also, create a Web API project in the same folder for communicating with our Service. Then add both projects to a common solution.
Then we add the Cronus dependency.
This is the minimum set of packages for our Cronus host to work.
Setup Cassandra (Container memory is limited to 2GB):
docker run --restart=always -d --name cassandra -p 9042:9042 -p 9160:9160 -p 7199:7199 -p 7001:7001 -p 7000:7000 cassandra
Setup RabbitMq (Container memory is limited to 512MB):
docker run --restart=always -d --hostname node1 -e RABBITMQ_NODENAME=docker-UNIQUENAME-rabbitmq --name rabbitmq -p 15672:15672 -p 5672:5672 elders/rabbitmq:3.8.3
Add appsettings.json with the following configuration into the project folder.
// This should be in the Service and in the Api.
You can also see how the Cronus application can be configured in more detail in Configuration.
This is the code that your Program.cs in TaskManager.Service should contain.
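A rough sketch of such a host is below. The AddCronus registration extension and its signature are assumptions based on the Cronus packages; consult the package documentation for the exact bootstrap API.

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// TaskManager.Service: a generic host that runs the Cronus consumers.
var host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((context, services) =>
    {
        services.AddCronus(context.Configuration); // assumed extension method from the Cronus package
    })
    .Build();

await host.RunAsync();
```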
This is the code that you should add in the Program.cs in TaskManager.Api.
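And a rough sketch for the API project, again assuming an AddCronus registration extension; Swagger is included because the tutorial uses it to post the first command.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

// TaskManager.Api: exposes controllers that publish commands and query projections.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddCronus(builder.Configuration); // assumed extension method from the Cronus package
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();                  // Swashbuckle.AspNetCore

var app = builder.Build();

app.UseSwagger();
app.UseSwaggerUI();
app.MapControllers();

app.Run();
```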