The world is event-driven, so your project should be too

Photo by Thaddaeus Lim on Unsplash

1. Problem

Sometimes, as a developer, you need to implement a system for a power plant, a bank, or anything else that is not a hello-world project but has many moving parts: integration points, internal projects built by different teams, and so on.

Software created without a proper technical plan, one designed to accommodate changing requirements, is easy to start with. However, after a while, the speed of adding new features degrades and programmers struggle. Why does this happen? The result is the “Big Ball of Mud”: an ad-hoc software architecture full of patches, bugs, and tension between different parts of the team.

We can do better!

2. Monolith - Partial Solution

We can stop adding features and think about why we slowed down and why our software quality degraded. Was it because of new joiners? Was it because of unrealistic business requirements? Why don’t users like our projects? Asking these and similar questions leads us to a better solution: a Monolith. It can end up similar to the Big Ball of Mud if built without much thinking and architectural vision, or it can be better prepared. In general, there are two options here:

- A Layered monolith, where we split the project(s) into different concerns: DB, view, controller, service, connectors, adapters, etc.
- A Modular monolith, where we pack everything into modules built around features, so that accidental technical complexity stays at a minimum and we build only what is necessary to deliver the business requirements.

Why might it not work? It may fail because many monoliths will eventually need to talk to each other, especially in a successful organization. Businesses would like to have synergy between projects. Ad-hoc integration between two different teams/projects within the same organization can be just as painful as integrating with external services, and surprisingly often even more so. At least an external service is shipped and offered to other users via a well-defined, documented API.

3. Event-Driven Architecture (EDA) comes to the rescue

When we think about our problems with ad-hoc integration within our organizations, we realize that teams keep reinventing the wheel. All teams are doing more or less the same tasks: monitoring, alerting, notifications, scheduling, etc. At the same time, we observe silos of logic in some teams, while others are small and cannot get through the corporate bullshit to obtain the data they need from other teams.

The common patterns discovered by different teams are summed up in The Reactive Manifesto, which describes the need for reactive systems, a need that has grown together with the Cloud, mobile apps, and IoT.

3.1 Evolution

The change in how we implement software came with a change in how we deploy it. The industry has chosen horizontal scaling, where small, not-so-powerful machines are connected into clusters and where nodes can appear and disappear without disturbing the main application. The Cloud revolutionized software.

Simultaneously with the new way of deploying and maintaining our applications, we learned that the problem with the modular or layered monolith was tight coupling. Tight coupling kills efficiency in development and in production, and it goes against the Reactive Manifesto. To be fair, though, some level of coupling must remain.

Before going event-driven, our system looks like this:

A->B // A sends data to B whenever A feels it is suitable, expecting that B will always be ready to receive it.

Component A needs to know about component B every time it wants to send something. If we consume tweets or telemetry, A will talk to B thousands of times per second. We need something better to make the system more stable, more responsive, and more resilient.

The new design is:

- A->event store // We publish data (events) to some addressable place
- B->event store // Whoever needs this data subscribes to this addressable place and consumes what we published (see the sketch below).
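
To make this concrete, here is a minimal, library-agnostic sketch of the new shape in Java: a tiny in-memory event store with publish and subscribe, where A and B only know the topic name, never each other. The EventStore class and its method names are illustrative, not any specific product's API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal in-memory "event store": producers publish to a named topic,
// consumers subscribe to the same name; neither side knows about the other.
final class EventStore {
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    void publish(String topic, String event) {
        subscribers.getOrDefault(topic, List.of()).forEach(handler -> handler.accept(event));
    }
}

public class PubSubSketch {
    public static void main(String[] args) {
        EventStore store = new EventStore();

        // Component B only knows the topic name, not component A.
        store.subscribe("telemetry", event -> System.out.println("B received: " + event));

        // Component A only knows the topic name, not component B.
        store.publish("telemetry", "{\"deviceId\":42,\"temperature\":21.5}");
    }
}
```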

3.2 Events

Such a subtle change, from A->B to A->something and B->something, changes a lot. Suddenly, building data pipelines, machine learning, data analysis, pivoting on ideas, enabling/disabling features, and efficiently using multiple teams all become possible.

In general, we categorize events into three types (a short code sketch follows the list):

  • Fact Events - facts about our domain, something that has happened: for example, some telemetry value changed.
  • Commands - these are also events, but of a different type: a request for something to be done.
  • Queries - whenever we need to see some internal state, or the state of multiple aggregates, we ask questions. They should not cause any side effects.
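
As a sketch, the three categories can be modelled as three distinct types, so the intent is visible in the code. This is only an illustration; the type and field names are assumptions, not taken from any particular system.

```java
import java.time.Instant;

// Fact event: an immutable statement about something that already happened.
record TelemetryChanged(String deviceId, double temperature, Instant occurredAt) {}

// Command: a request to do something; it may still be rejected or fail.
record CreateOrder(String customerId, String productId, int quantity) {}

// Query: a question about current state; handling it must not cause side effects.
record GetOrderStatus(String orderId) {}
```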

3.3 Implementation

To enable all of the good things of EDA (Event-Driven Architecture), we need to agree on a naming convention for the “something” that replaces our “A->B” architecture. We will use the publisher-subscriber pattern, which requires us to publish our data to a topic. The very first step is to define our naming convention for topics.

For example: orders/{productGroup}/{action}/{version}/{area}/{deliveryMode}/{customerID}/{productID}/{orderID}

With topic names defined this way, our machine learning model can subscribe to and receive all the current beer orders. Do we need to report how much we earned from beer sales? No problem, we subscribe to: billing/beer/paid/v1

The topic names reflect our business structure. For example, we can use <domain>.<classification>.<description>.<version>, where:

  • domain - for example: comms (all events around device communications), fleet (all events relating to our fleet management), identity (security topics around authentication), etc.
  • classification - fct (fact events), cmd (commands), sys (internal topics)
  • description - the type of data: customers, invoices, users, etc.

For example: analytics.fct.pageviews.0, comms.fct.gps.0
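
One way to keep teams honest is to build topic names only through a small helper that encodes the convention. The sketch below assumes the <domain>.<classification>.<description>.<version> scheme from above; the class and method names are made up for illustration.

```java
import java.util.Set;
import java.util.regex.Pattern;

// Builds and validates topic names following <domain>.<classification>.<description>.<version>.
final class TopicNames {
    private static final Set<String> CLASSIFICATIONS = Set.of("fct", "cmd", "sys");
    private static final Pattern SEGMENT = Pattern.compile("[a-z][a-z0-9-]*");

    static String of(String domain, String classification, String description, int version) {
        if (!CLASSIFICATIONS.contains(classification)) {
            throw new IllegalArgumentException("Unknown classification: " + classification);
        }
        for (String segment : new String[] {domain, description}) {
            if (!SEGMENT.matcher(segment).matches()) {
                throw new IllegalArgumentException("Invalid segment: " + segment);
            }
        }
        return String.join(".", domain, classification, description, String.valueOf(version));
    }

    public static void main(String[] args) {
        System.out.println(of("analytics", "fct", "pageviews", 0)); // analytics.fct.pageviews.0
        System.out.println(of("comms", "fct", "gps", 0));           // comms.fct.gps.0
    }
}
```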

It is essential to disable automatic topic creation, so that teams follow the naming convention.
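
With Apache Kafka (discussed next), that means setting auto.create.topics.enable=false on the brokers and creating topics deliberately, for example with the AdminClient. A minimal sketch, assuming a broker on localhost:9092 and the example topic names above:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Topics are created explicitly, following the agreed naming convention,
            // because auto.create.topics.enable=false on the brokers.
            admin.createTopics(List.of(
                    new NewTopic("analytics.fct.pageviews.0", 3, (short) 1),
                    new NewTopic("comms.fct.gps.0", 3, (short) 1)
            )).all().get();
        }
    }
}
```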

Apache Kafka can be an excellent choice for this implementation, as it helps in building event-driven systems. It lets us decouple components and enables much easier integration within the organization. It is fault-tolerant, with some concepts inherited from HDFS, HBase, and Cassandra. It provides linear scale-out, a replayable log, fast reads and writes, scalability, and durability.
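
As a minimal illustration of the publish and subscribe sides with Kafka's Java client, here is a sketch that assumes a broker running on localhost:9092 and the comms.fct.gps.0 topic from the naming example; the payload, key, and consumer group name are invented for the example.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class GpsEventsExample {
    private static final String TOPIC = "comms.fct.gps.0";

    public static void main(String[] args) {
        // Publisher side: the producer only knows the topic, not its consumers.
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(TOPIC, "device-42", "{\"lat\":52.23,\"lon\":21.01}"));
        }

        // Subscriber side: the consumer reads the replayable log at its own pace.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "fleet-tracking");
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of(TOPIC));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
            }
        }
    }
}
```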

3.4 It can be tough

Implementation is not easy because it requires a lot of discipline (e.g. following the topic naming convention). Organizations need to invest in domain experts who understand how to split the logic, figure out which teams are responsible for which parts, and make sure the events use proper business language.

Some patterns may be necessary to meet business or non-functional requirements, like CQRS to scale reads and writes separately, Saga for data consistency between different services, and Event Sourcing for fast writes and auditability.
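
To show the idea behind Event Sourcing in a few lines, here is a toy sketch: the write side only appends events, and the read side rebuilds state by replaying them, which also gives a full audit trail. The event and field names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Event Sourcing in miniature: state is never stored directly,
// it is rebuilt by replaying the events appended so far.
sealed interface OrderEvent permits OrderPlaced, OrderPaid {}
record OrderPlaced(String orderId, int quantity) implements OrderEvent {}
record OrderPaid(String orderId) implements OrderEvent {}

class OrderState {
    int quantity;
    boolean paid;

    void apply(OrderEvent event) {
        if (event instanceof OrderPlaced placed) quantity = placed.quantity();
        if (event instanceof OrderPaid) paid = true;
    }
}

public class EventSourcingSketch {
    public static void main(String[] args) {
        // Write side: append-only log (fast writes, full audit trail).
        List<OrderEvent> log = new ArrayList<>();
        log.add(new OrderPlaced("order-1", 3));
        log.add(new OrderPaid("order-1"));

        // Read side: replay the log to answer queries.
        OrderState state = new OrderState();
        log.forEach(state::apply);
        System.out.println("quantity=" + state.quantity + " paid=" + state.paid);
    }
}
```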

Refactoring already existing systems requires a lot of DDD and architecture knowledge. Rewriting a monolith that is already in production to an event-driven style requires a step-by-step approach. In the first step, the Event-Carried State Transfer pattern helps switch from tightly coupled (point-to-point) integration to a message-driven, loosely coupled, publisher-subscriber style.
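
A tiny sketch of Event-Carried State Transfer: the event carries the full new state, so a consumer keeps its own local, eventually consistent copy of the data and never calls the source service back. The service, event, and field names here are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The event carries all the state the consumer needs.
record CustomerAddressChanged(String customerId, String street, String city) {}

class ShippingService {
    // Local, eventually consistent copy of the data the shipping team needs.
    private final Map<String, CustomerAddressChanged> addresses = new ConcurrentHashMap<>();

    void on(CustomerAddressChanged event) {
        addresses.put(event.customerId(), event);
    }

    String shippingLabelFor(String customerId) {
        CustomerAddressChanged address = addresses.get(customerId);
        return address == null ? "unknown address" : address.street() + ", " + address.city();
    }
}

public class EventCarriedStateTransferSketch {
    public static void main(String[] args) {
        ShippingService shipping = new ShippingService();
        shipping.on(new CustomerAddressChanged("cust-1", "1 Main St", "Warsaw"));
        System.out.println(shipping.shippingLabelFor("cust-1"));
    }
}
```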

In Phase 2, we use the Strangler Fig pattern and the Event Collaboration pattern to move functions from the monolith (the legacy place) to the new system.

In the last phase (Phase 3), we make sure we have well-defined Bounded Contexts if we decided to use a microservice architecture, and we refactor by splitting all the functions into Query and Command sides. In each context, we keep an Event Store for the events of that context's microservice. Domain experts need to help create a ubiquitous language for naming fields, events, commands, queries, methods, and functions within each service.
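
As a sketch of that Command/Query split inside a single bounded context, the two sides can be given separate entry points so that reads never mutate state; the interface and message names below are illustrative, not from any framework.

```java
// Command side: may change state and emit events.
interface CommandHandler<C> {
    void handle(C command);
}

// Query side: read-only, must not cause side effects.
interface QueryHandler<Q, R> {
    R handle(Q query);
}

record PlaceOrder(String orderId, String productId, int quantity) {}
record GetOrderSummary(String orderId) {}
```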

4. Drawbacks

EDA is only one solution. There are many others, and not all organizations are ready to use them. Suppose the organization doesn’t want to come up with a common plan for how the business operates. In that case, the topic names and technology won’t be agreed upon, and some teams won’t push their data to easily accessible topics, which defeats the purpose of event-driven architecture.

At the same time, agreeing on common things in big companies requires time and money. Skipping some of the necessary prerequisites for event-driven architecture can end in a disastrous architecture that mixes all the different styles: a Big Ball of Mud via Kafka. Using frameworks or tools does not mean we will be effective.

Losing good developers, DevOps engineers, or domain experts can leave a hole in the overall architecture and stop the whole department or business domain.

5. References

  1. https://www.reactivemanifesto.org/
  2. https://medium.com/swlh/monolith-to-event-driven-microservices-with-apache-kafka-6e4abe171cbb
  3. https://devshawn.com/blog/apache-kafka-topic-naming-conventions/
  4. Book: Designing Event-Driven Systems by Ben Stopford