Command Query Responsibility Segregation (CQRS)
Implementing a Command Query Responsibility Segregation (CQRS) Architecture with Spring Boot for Improved Scalability and Flexibility
The Command and Query Responsibility Segregation (CQRS) pattern segregates the responsibilities for read and write operations. This means that the paths for reads and writes may be distinct within the application, potentially handled by different services and applied to different data stores. By separating the handling of read and write operations, CQRS can improve the scalability, performance, and maintainability of a system.
Well, that was the simple definition of CQRS from my viewpoint. Now we will dive deeper into its concepts and a step-by-step implementation in Spring Boot.
CQRS Design Pattern
CQRS is a software design pattern rooted in the Command Query Separation (CQS) principle suggested by Bertrand Meyer. According to the CQS principle, we should split the operations performed on domain objects into two separate categories: Queries and Commands. A Command is used to execute a task, while a Query is used to retrieve information. It is important to ensure that these two responsibilities are not combined into a single function.
While CQS is a general principle of behavior that suggests separating commands from queries, CQRS is a more extensive pattern that encompasses this principle within a broader system architecture.
In simple terms, CQRS involves separating the command and query aspects of an application’s architecture, which results in a vertical division of the application logic. This approach involves isolating the handling of state mutation, or commands, from the retrieval of data, or queries.
Queries return a result without changing the observable state of a system, whereas commands are responsible for changing the state of the system without necessarily returning a value.
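To make the separation concrete before going further, here is a minimal, hypothetical Java sketch of how the two sides might be modelled as distinct interfaces; the type names are illustrative and are not part of the sample application built later in this post.

import java.util.List;

// Hypothetical read model returned by queries.
record OrderView(String orderId, String customerId, double amount, String status) {}

// Commands mutate state and do not return domain data.
interface OrderCommands {
    void placeOrder(String orderId, String customerId, double amount);
    void cancelOrder(String orderId);
}

// Queries return data without changing the observable state of the system.
interface OrderQueries {
    OrderView findOrderById(String orderId);
    List<OrderView> findOrdersByCustomer(String customerId);
}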
Why CQRS?
In typical applications, reading and modifying data are both common tasks, often expressed as CRUD (Create, Read, Update, Delete) operations. However, as applications become more complex, the logical model can become more complicated and structured, potentially impacting the user experience. To create a scalable and maintainable application, it is important to reduce constraints between the read and write models.
CQRS addresses this issue by separating the responsibilities of handling commands (updates) and queries (reads). This allows each model to be optimized for its specific purpose, improving performance and scalability. CQRS also provides flexibility, making it easier to add new features or make changes to the system without impacting other parts of the application. However, CQRS is not necessary for every system and is most useful for complex systems with high performance requirements.
Known Problems Of CQRS
While the Command Query Responsibility Segregation (CQRS) pattern offers many potential benefits, there are tradeoffs to using this technique:
- Increased complexity: CQRS introduces additional complexity to the architecture of a system, as it requires separate models and handling mechanisms for read and write operations. This can make development and maintenance more challenging.
- Eventual consistency: Because the read and write models are separate, there may be a delay between when data is written and when it is available to read. This can result in eventual consistency, where read operations may return data that is slightly out of date.
- Data synchronization issues: Maintaining consistency between the write and read models can be challenging. This is especially true in distributed systems, where data must be synchronized across multiple nodes. This issue can be reduced by adopting a simpler CQRS variant with a shared database for read and write operations.
- Increased testing complexity: Testing CQRS-based systems can be more complex than traditional architectures, as separate testing mechanisms may be required for the read and write models.
Despite these challenges, many organizations have successfully implemented CQRS to achieve improved scalability, performance, and maintainability. It is important to carefully consider the trade-offs and evaluate whether CQRS is the right choice for a given system.
Implementation Approaches and Considerations
Let us discuss some of the popular design approaches to implementing CQRS, taking into account both data consistency and data synchronization:
Separate read and write data models
One of the benefits of applying CQRS is that you can have different representations of your data. Your write model may look very different from your read model.
However, this doesn’t mean you need to have different data sources and use event handlers to build your query model. The Separate read and write data models pattern in CQRS involves dividing the domain model into separate models for handling read and write operations. This process involves more than just changing the API for accessing domain functions; it also requires a redesign of how these functions are structured and implemented.
If you’re using a relational database, you can still get the benefits of tailored query models by mapping your read models to database views or materialized views. By doing so, you can avoid the complexities of eventual consistency and event handlers for updating your read/query database. This approach ensures that your command/write model and query/read models are always consistent, eliminating any concerns about eventual consistency.
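As a rough illustration of this approach, the query side could map a read-only JPA entity onto such a view; the view name and columns below are assumptions made for this sketch (use javax.persistence instead of jakarta.persistence on older Spring Boot versions).

import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;
import org.hibernate.annotations.Immutable;

// Read-only query model backed by a database view created over the write tables,
// e.g. CREATE VIEW order_summary_view AS SELECT ... (hypothetical).
@Entity
@Immutable
@Table(name = "order_summary_view")
public class OrderSummaryView {

    @Id
    private String orderId;
    private String customerId;
    private double orderAmount;
    private String status;

    // getters omitted for brevity
}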
From my experience, when applied incorrectly, eventual consistency can be a giant pain and not at all what your users are expecting; it can lead to unexpected issues and challenges. It is therefore important to choose the CQRS design that fits your requirements and to make design choices that align with your specific needs and goals.
Separate read and write databases
In a single-source data design, one database is the target of both read and write activity. In the separate read and write databases approach to CQRS, read and write activity is split across different data sources: we separate both the domain model and its persistence for handling write and read operations.
CQRS allows each data source to be optimized for its specific use case. For example, a relational database like Postgres or MySQL might be better suited for handling write data, while a document database like MongoDB might be more suitable for storing and displaying hierarchical data on an eCommerce site.
By separating read and write activities, CQRS can help improve the performance and scalability of an application, as well as enable the use of specialized data storage technologies that are best suited for each type of activity. Even though separating write storage from read storage can offer great flexibility and efficiency for web-scale applications, there is a significant challenge that must be addressed: data synchronization.
Since write and read data are stored in separate databases, it is essential to ensure that the read database is kept up-to-date with changes made to the write database. Failing to synchronize the data effectively can lead to inconsistencies between the write and read databases, resulting in incorrect or outdated information being presented to users. The approaches described next include effective synchronization strategies.
Overall, while the CQRS pattern offers significant benefits, data synchronization is a crucial factor that must be considered and managed effectively to ensure the success of the application.
Implement message-based communication
Message-based communication is an essential aspect of implementing CQRS applications to ensure data consistency across different models. When a command is executed in the write model, a message is sent to the read model to update its data, ensuring that both models are always consistent and up-to-date.
To implement message-based communication, a messaging system must be set up that enables the exchange of commands and events between various components of the system. It is recommended to use an Event Bus System based on a message queue or pub/sub model. An event bus system like Kafka is ideal because it stores the events/messages, which can be resent to subscribers if their intended purpose was not fulfilled due to any network communication issues.
This approach improves performance and scalability, making it essential for building robust and reliable CQRS applications. However, if storing all the events in a data store for a longer period is necessary, Event Sourcing must be used. This will allow for the regeneration of the same database state in case of any failures in the read or write databases by using the stored events.
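As a hedged sketch of the receiving side of such messaging, a read-model updater could consume these messages with Spring for Apache Kafka; the topic name, payload type, repository, and mapping method below are hypothetical and assume a JSON-capable consumer factory is configured.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderReadModelUpdater {

    // Hypothetical Spring Data repository for the read-side store.
    private final OrderViewRepository orderViewRepository;

    public OrderReadModelUpdater(OrderViewRepository orderViewRepository) {
        this.orderViewRepository = orderViewRepository;
    }

    // Consumes change messages published by the write side and refreshes the
    // query store, keeping the read model eventually consistent.
    // OrderChangedMessage and OrderView.fromMessage(...) are placeholder types for this sketch.
    @KafkaListener(topics = "order-events", groupId = "order-query-service")
    public void onOrderChanged(OrderChangedMessage message) {
        orderViewRepository.save(OrderView.fromMessage(message));
    }
}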
Use event sourcing
What is Event sourcing? — Event sourcing is a pattern used in software development that involves storing every change made to an application’s state as a series of immutable events. Instead of storing the current state of the application, event sourcing maintains a log of all changes made to the system over time. This approach can help with auditing, debugging, and tracking changes to data.
When an event is created, it represents a change that has occurred in the system. The event is then stored in an event store, which acts as a database of events. Each event in the event store is immutable and contains all the information needed to reconstruct the state of the system at a given point in time.
Event sourcing is often used in conjunction with CQRS, to create scalable and maintainable software systems. In CQRS, we store all the events in a datastore so that in the future when read and write databases get out of sync or the data source gets destroyed, then replaying those event logs again, can help us regenerate the database(s) state.
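To make the replay idea concrete, here is a minimal, framework-free sketch (with hypothetical event types) of rebuilding an order's current state by folding over its stored events; Axon does essentially this for us through the event sourcing handlers shown later.

import java.util.List;

// Hypothetical domain events as they might sit in the event store.
sealed interface OrderEvent permits OrderCreated, OrderStatusChanged {}
record OrderCreated(String orderId, double amount) implements OrderEvent {}
record OrderStatusChanged(String orderId, String status) implements OrderEvent {}

class OrderState {
    String orderId;
    double amount;
    String status;

    // Rebuild the current state by replaying the event log from the beginning.
    static OrderState replay(List<OrderEvent> events) {
        OrderState state = new OrderState();
        for (OrderEvent event : events) {
            if (event instanceof OrderCreated created) {
                state.orderId = created.orderId();
                state.amount = created.amount();
                state.status = "PENDING";
            } else if (event instanceof OrderStatusChanged changed) {
                state.status = changed.status();
            }
        }
        return state;
    }
}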
When And Why To Use CQRS?
- Scenarios where the volume of read operations is significantly higher than write operations.
In such scenarios, the CQRS pattern segregates the responsibilities for read and write operations into different logical or physical services, with shared or separate data models. You do not have to conflate your write schema with your query schema, and you no longer need to lock tables or records for updates, which can impact performance. This is most efficient when the read and write sides use separate data models.
- Where one team of developers can focus on the write side and another team can focus on the read side. There is nothing limiting CQRS to a single read-side: multiple independent read projections may coexist, and therefore, multiple teams may be assigned to the task.
This is mainly applicable in cases where read and write will be physically separate services. Here the responsibility of scaling and optimization of command and query parts separately lies in the team responsible for that particular microservice.
- Cases where the access patterns for writing differ significantly from those for reading, especially when the number of reads is much greater than the number of writes. For example, transactional writes versus reporting and analytics on the read side. In this scenario, you can scale out the read model while running the write model on just a few instances.
In the CQRS pattern, we can add support for new queries by creating a new materialized view without impacting the performance and availability of the write store or other query stores. We can also take customization a step further and use different storage technologies depending on the requirements of a query. For example, certain queries might be better served by a graph or NoSQL database; depending on the use case, it may be better to keep the read data in a NoSQL store to avoid complex joins.
CQRS with Spring Boot and Axon Server
Here we will consider the order system, a major component of an e-commerce application. For learning purposes, we treat this module as part of a large-scale application with high read and write volumes, and therefore design it using the CQRS pattern. We will split the order service into two services, order-command-service and order-query-service, so that read and write operations are segregated and can be scaled separately. However, for simplicity, we will keep the database shared between the read and write services.
The figure above is a schematic diagram of how CQRS communication happens in an order service that leverages event sourcing.
The application referred to in this blog post uses the following technology stack:
- Spring Boot for building and packaging the application
- Axon Framework, heavily inspired by Domain-Driven Design concepts, together with Spring for CQRS and Event Sourcing. Axon is an open-source CQRS framework for Java that provides implementations of the most important building blocks, such as aggregates, repositories, and event buses, helping us build applications using the CQRS and Event Sourcing architectural patterns.
With this background, let us start building the sample application. You can find the complete source code on GitHub.
Implementation Details
The system depicted in the diagram treats each Create/Update request as an event that is stored in Axon’s event store.
Let’s understand what’s happening in this diagram:
- We have a command handler. Basically, all command requests are received here through the command gateway.
- The command processing part takes care of handling all the commands and generating appropriate events. The events are persisted in the event store. Of course, validations and enforcement of business rules are performed before the events are persisted. Also, after the events are persisted, they are published on a message queue where required. In Axon's terminology, the Aggregate and the Event Handler come under command processing.
- The event store: Axon Server is used to store all events.
- The messaging service: Kafka handles service-level communication by triggering the payment service with a new_payment_event. This is handled in the payment service, and the order status gets updated from PENDING to PROCESSED.
- The Query Processing application listens to the events. Basically, this application takes the event payload and persists the data in the query store based on the required read models.
- The query handler part handles the incoming read requests. It retrieves the data from the query store using Projection and outputs it.
Let’s look at some of these concepts and their implementation in detail:
Application Examples — Spring Boot & Axon Framework
We will use Axon Server as our Event Store and our dedicated command, event, and query routing solution.
To begin, download Axon Server from here and run an instance on your local machine; it can be accessed through the endpoint at localhost:8024.
Since Axon Server is a simple JAR file, the following operation suffices to start it up:
java -jar axonserver.jar
This endpoint provides valuable information about the connected applications and the messages they can handle, as well as a querying mechanism for the Event Store within Axon Server.
When using the axon-spring-boot-starter dependency together with the default configuration of Axon Server, your Order service will be automatically connected to the Axon Server instance, enabling seamless communication with other Axon-based applications. To do so, simply include Axon Spring Boot Starter as a dependency in a Spring Boot project.
<!-- For Maven -->
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-spring-boot-starter</artifactId>
    <version>4.7.3</version>
</dependency>

// For Gradle
implementation("org.axonframework:axon-spring-boot-starter:4.7.3")
Creating the Event Sourced Entity
In this ordering system, we will create OrderAggregate; this entity will act as our use case to demonstrate Event Sourcing.
@Aggregate
public class OrderAggregate {
@AggregateIdentifier
private String orderId;
private String customerId;
private double orderAmount;
private String status;
private String invoiceNumber;
private List<Product> products;
private String transactionId;
private ShippingAddress shippingAddress;
private BillingAddress billingAddress;
// ... remaining fields and handlers are shown later in this post
}
Important things to note here are the two annotations.
- @Aggregate annotation tells Axon that this entity will be managed by Axon. Basically, this is similar to @Entity annotation available with JPA. However, we will be using the Axon recommended annotation.
- @AggregateIdentifier annotation is used for identifying a particular instance of the Aggregate. In other words, it is similar to JPA’s @Id annotation.
Modeling the Commands and Events
Axon works on the concept of commands and events.
- Commands are actions or write operations that can change the state of your aggregate.
- Events are the actual changing of that state.
Now, considering our Order aggregate, there could be many commands and events possible. However, we will try and model some important ones.
The primary commands would be Create Order Command and Update Order Command. Based on them, the corresponding events that can occur are Order Create Event and Order Update Event.
First, we will create the Create Order Command and understand its details.
@Data
public class CreateOrderCommand {
@TargetAggregateIdentifier
private String orderId;
private String customerId;
private String createdDate;
private String updatedDate;
private Double orderAmount;
private String status;
private String invoiceNumber;
private String transactionId;
private String createdBy;
private String updatedBy;
private List<Product> products;
private ShippingAddress shippingAddress;
private BillingAddress billingAddress;
}
The most important thing to note here is the @TargetAggregateIdentifier annotation. Basically, this is an Axon specific requirement to identify the aggregate instance. In other words, this annotation is required for Axon to determine the instance of the Aggregate that should handle the command. The annotation can be placed on either the field or the getter method. In this example, we chose to put it on the field.
Similar to the Create Order Command, we can also create an Update Order Command.
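A minimal sketch of the UpdateOrderCommand is shown below, assuming only the fields that the update handler later in this post actually uses; the complete class is available in the GitHub repository.

@Data
public class UpdateOrderCommand {
    @TargetAggregateIdentifier
    private String orderId;
    private String status;
    private String transactionId;
}

With the commands in place, the next step is to implement the events.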
@Data
@NoArgsConstructor
@AllArgsConstructor
public class OrderCreateEvent {
private String orderId;
private String customerId;
private String createdDate;
private String updatedDate;
private double orderAmount;
private String status;
private String invoiceNumber;
private String createdBy;
private String updatedBy;
private List<Product> products;
private String transactionId;
private ShippingAddress shippingAddress;
private BillingAddress billingAddress;
}
The event class OrderCreateEvent is a simple POJO. For brevity, only the creation command and event classes are shown in full here; the complete code is available on GitHub.
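For reference, a minimal sketch of the corresponding OrderUpdateEvent, again limited to the fields used by the update handlers below, would look roughly like this:

@Data
@NoArgsConstructor
@AllArgsConstructor
public class OrderUpdateEvent {
    private String orderId;
    private String status;
    private String transactionId;
}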
Command Handlers and Event Handlers
Now that we successfully modeled the commands and events, the next step is to implement handlers for them. These handlers are methods on the Aggregate that are responsible for processing a particular command or event.
Since the handlers are closely related to the Aggregate, it’s best practice to define them within the Aggregate class itself. Also, the command handlers often need to access the state of the Aggregate.
For our specific use case, we will define these handlers within the OrderAggregate class. You can see the complete implementation of the OrderAggregate class below.
@Slf4j
@Aggregate
public class OrderAggregate {
@AggregateIdentifier
private String orderId;
private String customerId;
private String createdDate;
private String updatedDate;
private double orderAmount;
private String status;
private String invoiceNumber;
private String createdBy;
private String updatedBy;
private List<Product> products;
private String transactionId;
private ShippingAddress shippingAddress;
private BillingAddress billingAddress;
public OrderAggregate() {}
@CommandHandler
public OrderAggregate(CreateOrderCommand createOrderCommand) {
if (Objects.isNull(createOrderCommand.getOrderId()))
throw new IllegalArgumentException("Order id should be present...!");
if (createOrderCommand.getOrderAmount() <= 0)
throw new IllegalArgumentException("Price cannot be less or equal to zero...!");
if (Objects.isNull(createOrderCommand.getProducts()))
throw new IllegalArgumentException("Product should be present...!");
createOrderCommand.setCreatedDate(LocalDateTime.now().toString());
log.debug("Handling {} command: {}", createOrderCommand.getClass().getSimpleName(),
createOrderCommand.getOrderId());
OrderCreateEvent orderCreateEvent = new OrderCreateEvent();
BeanUtils.copyProperties(createOrderCommand, orderCreateEvent);
AggregateLifecycle.apply(orderCreateEvent);
log.trace("Done handling {} event: {}", orderCreateEvent.getClass().getSimpleName(),
orderCreateEvent.getOrderId());
}
@CommandHandler
public void handleUpdateOrderCommand(UpdateOrderCommand updateOrderCommand) {
if (Objects.isNull(updateOrderCommand.getOrderId()))
throw new IllegalArgumentException("Order id should be present...!");
if (Objects.isNull(updateOrderCommand.getStatus()))
throw new IllegalArgumentException("Order status should be present...!");
log.debug("Handling {} command: {}", updateOrderCommand.getClass().getSimpleName(), updateOrderCommand.getOrderId());
OrderUpdateEvent orderUpdateEvent = new OrderUpdateEvent();
orderUpdateEvent.setOrderId(updateOrderCommand.getOrderId());
orderUpdateEvent.setStatus(updateOrderCommand.getStatus());
orderUpdateEvent.setTransactionId(updateOrderCommand.getTransactionId());
AggregateLifecycle.apply(orderUpdateEvent);
log.trace("Done handling {} event: {}", orderUpdateEvent.getClass().getSimpleName(), orderUpdateEvent.getOrderId());
}
@EventSourcingHandler
public void on(OrderCreateEvent orderCreateEvent) {
log.debug("Handling {} event: {}", orderCreateEvent.getClass().getSimpleName(), orderCreateEvent.getOrderId());
this.orderId = orderCreateEvent.getOrderId();
this.billingAddress = orderCreateEvent.getBillingAddress();
this.createdBy = orderCreateEvent.getCreatedBy();
this.createdDate = orderCreateEvent.getCreatedDate();
this.customerId = orderCreateEvent.getCustomerId();
this.invoiceNumber = orderCreateEvent.getInvoiceNumber();
this.orderAmount = orderCreateEvent.getOrderAmount();
this.products = orderCreateEvent.getProducts();
this.shippingAddress = orderCreateEvent.getShippingAddress();
this.status = orderCreateEvent.getStatus();
this.updatedBy = orderCreateEvent.getUpdatedBy();
this.updatedDate = orderCreateEvent.getUpdatedDate();
this.transactionId = orderCreateEvent.getTransactionId();
log.trace("Done handling {} event: {}", orderCreateEvent.getClass().getSimpleName(), orderCreateEvent.getOrderId());
}
@EventSourcingHandler
public void on(OrderUpdateEvent orderUpdateEvent) {
log.debug("Handling {} event: {}", orderUpdateEvent.getClass().getSimpleName(), orderUpdateEvent.getOrderId());
this.orderId = orderUpdateEvent.getOrderId();
this.status = orderUpdateEvent.getStatus();
this.transactionId = orderUpdateEvent.getTransactionId();
log.trace("Done handling {} event: {}", orderUpdateEvent.getClass().getSimpleName(), orderUpdateEvent.getOrderId());
}
}
As you can see, we are handling the two commands in their own handler methods. These handler methods should be annotated with @CommandHandler annotation.
One thing to note here is that one of the command handlers (the one that creates the aggregate) must be the constructor of OrderAggregate.
The handler methods use Axon's AggregateLifecycle.apply() method to register events.
These registered events, in turn, are handled by methods annotated with @EventSourcingHandler annotation. It is crucial to perform all state changes related to an event sourced aggregate within the annotated methods.
Another important thing to point out here is the no-args default constructor. This constructor is used by Axon to create an empty instance of the aggregate; failing to include it will result in an exception.
The Order Create Event and Order Update Event defined previously will be processed by the OrderCreateEventHandler and OrderUpdateEventHandler, respectively. These event handlers will be utilized for business logic purposes, such as saving data to a database, triggering messages, and so on.
Let us see how OrderCreateEventHandler looks in our application:
@Slf4j
@Component
@RequiredArgsConstructor
public class OrderCreateEventHandler {
private final OrderRepository orderRepository;
private final PaymentEventPublisher paymentEventPublisher;
@EventHandler
public void on(OrderCreateEvent orderCreateEvent) throws JsonProcessingException {
log.debug("Handling {} event: {}", orderCreateEvent.getClass().getSimpleName(), orderCreateEvent.getOrderId());
Order order = new Order();
BeanUtils.copyProperties(orderCreateEvent, order);
ShippingAddress shippingAddress = new ShippingAddress();
BillingAddress billingAddress = new BillingAddress();
BeanUtils.copyProperties(orderCreateEvent.getShippingAddress(), shippingAddress);
order.setShippingAddress(shippingAddress);
BeanUtils.copyProperties(orderCreateEvent.getBillingAddress(), billingAddress);
order.setBillingAddress(billingAddress);
var productEntities = orderCreateEvent.getProducts().stream()
.map(dto -> new Product(dto.getProductId(), dto.getProductName(), dto.getQuantity(), dto.getPrice()))
.toList();
order.setProducts(productEntities);
orderRepository.save(order);
paymentEventPublisher.initiatePaymentEvent(order);
log.trace("Done handling {} event: {}", orderCreateEvent.getClass().getSimpleName(), orderCreateEvent.getOrderId());
}
}
In the OrderCreateEventHandler presented above, the order information is stored in the write database of the Order Service, and a payment event message is published to Kafka. While Kafka integration is not mandatory in a CQRS pattern, we use Kafka in this application to ensure scalability and reduce coupling. The payment service is therefore triggered for payment via an event-driven mechanism using Kafka. The detailed code for the payment publisher can be found in the GitHub repository.
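The exact publisher lives in the repository; a hedged sketch of how it could look with Spring for Apache Kafka is shown below. The topic name and the JSON serialization of the order are assumptions made for this sketch.

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Slf4j
@Component
@RequiredArgsConstructor
public class PaymentEventPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;
    private final ObjectMapper objectMapper;

    // Serializes the saved order and publishes it so the payment service can
    // react to the new_payment_event (topic name assumed for this sketch).
    public void initiatePaymentEvent(Order order) throws JsonProcessingException {
        String payload = objectMapper.writeValueAsString(order);
        kafkaTemplate.send("new_payment_event", order.getOrderId(), payload);
        log.debug("Published payment event for order {}", order.getOrderId());
    }
}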
The OrderUpdateEventHandler saves the order status in the database after the payment status is updated by the payment-service. Let's see the code for OrderUpdateEventHandler:
@Slf4j
@Component
@RequiredArgsConstructor
public class OrderUpdateEventHandler {
private final OrderRepository orderRepository;
@EventHandler
public void on(OrderUpdateEvent orderUpdateEvent) {
log.debug("Handling {} event: {}", orderUpdateEvent.getClass().getSimpleName(), orderUpdateEvent.getOrderId());
Optional<Order> optionalOrder = orderRepository.findByOrderId(orderUpdateEvent.getOrderId());
if (optionalOrder.isPresent()) {
Order order = optionalOrder.get();
order.setStatus(orderUpdateEvent.getStatus());
order.setUpdatedDate(LocalDateTime.now().toString());
orderRepository.save(order);
}
log.trace("Done handling {} event: {}", orderUpdateEvent.getClass().getSimpleName(), orderUpdateEvent.getOrderId());
}
}
Defining the Controllers for REST API
To enable a consumer to interact with our application, it is necessary to expose some interfaces, which can be achieved with Spring Web MVC, which offers robust support for this purpose.
We need two controllers; the first one handles the commands in order-command-service:
@Slf4j
@RestController
@RequestMapping("/orders")
@RequiredArgsConstructor
@Api(value = "Order Commands", tags = "Order Commands")
public class OrderCommandController {
private final OrderCommandService orderCommandService;
@PostMapping
public ResponseEntity<?> createOrder(@Valid @RequestBody OrderDto orderDto) throws ExecutionException,
InterruptedException {
CreateOrderCommand createOrderCommand = new CreateOrderCommand();
BeanUtils.copyProperties(orderDto, createOrderCommand);
log.debug("Processing CreateOrderCommand: {}", createOrderCommand);
CompletableFuture<String> order = orderCommandService.createOrder(createOrderCommand);
if (Objects.isNull(order))
throw new OrderCreationException("Order Creation failed.");
return ResponseEntity.status(HttpStatus.CREATED)
.body(successResponse(order.get(), "Order created successfully...!"));
}
@PatchMapping
public ResponseEntity<?> updateOrder(@RequestBody OrderDto orderDto) throws ExecutionException, InterruptedException {
if (Objects.isNull(orderDto.getOrderId())) {
throw new IllegalArgumentException("Invalid payload....! ");
}
UpdateOrderCommand updateOrderCommand = new UpdateOrderCommand();
BeanUtils.copyProperties(orderDto, updateOrderCommand);
CompletableFuture<String> order = orderCommandService.updateOrder(updateOrderCommand);
if (Objects.isNull(order))
throw new OrderCreationException("Order update failed.");
return ResponseEntity.status(HttpStatus.OK)
.body(successResponse(order.get(), "Order updated successfully...!"));
}
@DeleteMapping("/{orderId}")
public ResponseEntity<?> cancelOrder(@PathVariable("orderId") String orderId) throws ExecutionException, InterruptedException {
if (Objects.isNull(orderId)) {
throw new IllegalArgumentException("Invalid payload....! ");
}
throw new RuntimeException("Cancel order functionality not implemented");
}
}
Similar to OrderCommandController we can define OrderQueryController to handle queries in order-query-service:
@Slf4j
@RestController
@RequestMapping("/orders")
@RequiredArgsConstructor
@Api(value = "Order Queries", tags = "Order Queries")
public class OrderQueryController {
private final OrderQueryService orderQueryService;
@GetMapping
public CompletableFuture<OrderResults> retrieveAllOrders(
@RequestParam(required = false, name = "page") Integer page,
@RequestParam(required = false, name = "size") Integer size,
@RequestParam(name = "sort", required = false) String sort)
{
return orderQueryService.retrieveAllOrders(page, size, sort);
}
@GetMapping("/{orderId}")
public ResponseEntity<?> getOrderById(@PathVariable("orderId") String orderId)
throws ExecutionException, InterruptedException
{
CompletableFuture<Optional<OrderDto>> orderById = orderQueryService.getOrderById(orderId);
OrderDto orderDto = orderById.get().orElseThrow(() -> new NoSuchElementException("Order not found."));
return ResponseEntity.ok(orderDto);
}
@GetMapping("/{orderId}/events")
public ResponseEntity<?> getOrderEventsById(@PathVariable("orderId") String orderId)
throws ExecutionException, InterruptedException
{
List<Object> orderEvents = orderQueryService.getOrderEventsById(orderId);
return ResponseEntity.ok(orderEvents);
}
}
As depicted, the controllers primarily rely on the service classes for their business logic. Upon receiving the payload, the service forwards it to the CommandGateway to execute the appropriate commands. The output is then mapped back into the response.
Let us see how the service classes forward requests to Axon's command gateway.
The Service Layer
Basically, we try to follow SOLID principles, so we code to interfaces. We will have two service interfaces: OrderCommandService to handle the commands in the order-command-service project, and OrderQueryService in the order-query-service project.
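Derived from the method calls made by the controllers and the implementations below, the two interfaces could look roughly like this; the event-listing method is inferred from the query controller, and its implementation is omitted in this post.

import java.util.List;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

// In the order-command-service project
public interface OrderCommandService {
    CompletableFuture<String> createOrder(CreateOrderCommand createOrderCommand);
    CompletableFuture<String> updateOrder(UpdateOrderCommand updateOrderCommand);
}

// In the order-query-service project
public interface OrderQueryService {
    CompletableFuture<OrderResults> retrieveAllOrders(Integer page, Integer size, String sort);
    CompletableFuture<Optional<OrderDto>> getOrderById(String orderId);
    List<Object> getOrderEventsById(String orderId);
}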
Now, let us see the implementation details of both services; first is the OrderCommandServiceImpl.
@Slf4j
@Service
@RequiredArgsConstructor
public class OrderCommandServiceImpl implements OrderCommandService {
private final CommandGateway commandGateway;
public CompletableFuture<String> createOrder(CreateOrderCommand createOrderCommand) {
log.debug("Processing CreateOrderCommand: {}", createOrderCommand);
createOrderCommand.setOrderId(UUID.randomUUID().toString());
log.info("CreateOrderCommand assigned with new Order Id: {} ", createOrderCommand.getOrderId());
return commandGateway.send(createOrderCommand);
}
public CompletableFuture<String> updateOrder(UpdateOrderCommand updateOrderCommand) {
log.debug("Processing UpdateOrderCommand: {}", updateOrderCommand);
return commandGateway.send(updateOrderCommand);
}
}
The key point to highlight is the CommandGateway, which is a useful interface supplied by Axon for sending commands. By wiring up the CommandGateway, Axon automatically provides the DefaultCommandGateway implementation.
With the CommandGateway’s send method, commands can be sent and responses can be received.
Below is the implementation of OrderQueryServiceImpl. Similar to the CommandGateway in OrderCommandServiceImpl, here we use the QueryGateway to dispatch queries and retrieve data for the read side.
@Slf4j
@Service
@RequiredArgsConstructor
public class OrderQueryServiceImpl implements OrderQueryService {
private final QueryGateway queryGateway;
@Override
public CompletableFuture<OrderResults> retrieveAllOrders(Integer page, Integer size, String sort) {
GetOrderQuery getOrderQuery = GetOrderQuery.builder().page(page).size(size).sort(sort).build();
return queryGateway.query(getOrderQuery, ResponseTypes.instanceOf(OrderResults.class));
}
@Override
public CompletableFuture<Optional<OrderDto>> getOrderById(String orderId) {
return queryGateway.query(orderId, ResponseTypes.optionalInstanceOf(OrderDto.class));
}
}
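For the query side to answer these dispatched queries, a component with @QueryHandler methods (the projection mentioned in the diagram walkthrough) has to read from the query store. A minimal sketch for the getOrderById query is shown below, reusing the OrderRepository and OrderDto from earlier; the BeanUtils-based mapping is an assumption, a similar handler for GetOrderQuery backs retrieveAllOrders, and the full versions are in the GitHub repository.

@Slf4j
@Component
@RequiredArgsConstructor
public class OrderProjection {

    private final OrderRepository orderRepository;

    // Answers the query dispatched by getOrderById: the payload is simply the
    // orderId String sent through the QueryGateway in OrderQueryServiceImpl.
    @QueryHandler
    public Optional<OrderDto> handle(String orderId) {
        log.debug("Handling order query for id {}", orderId);
        return orderRepository.findByOrderId(orderId)
                .map(order -> {
                    OrderDto orderDto = new OrderDto();
                    BeanUtils.copyProperties(order, orderDto);
                    return orderDto;
                });
    }
}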
Conclusion
In conclusion, CQRS is a pattern that can partition the internals of your application into a read and a write side, allowing you to design and optimize the two paths independently. Implementing CQRS in your application can maximize its performance, scalability and security, with some trade-offs in data consistency and architectural complexity.
The entire application code is available in this GitHub repository for reference.