Software architecture is about managing change

We have previously discussed on our blog where the responsibility for implementing software architecture lies and how architecture affects the overall success of a software project. Software architecture refers to the design and development of software as a whole. It defines, among other things:

  • How is the software divided into parts, such as components and layers?
  • What are the responsibilities and dependencies of these parts?
  • How do these parts communicate with each other?

While software architecture is typically seen as a high-level concept focused on the overall structure of the software, it also influences low-level decisions. For instance, coding practices are a crucial aspect of the architecture, as they help define the roles and dependencies of individual components. As a result, the architecture cannot be fully captured by a simple diagram of arrows and boxes; it encompasses far more than visual representations alone.

Naturally, designing software also requires decisions regarding technologies. However, the selection of technologies is not the most important aspect of software architecture. In fact, a well-crafted software architecture minimizes its dependence on specific technologies, treating them largely as implementation details. Architecture is more about abstractions.

In particular, software architecture aims to ensure that changes can be made to the software both during development and after deployment — including adjustments to the underlying technologies. This way, the architecture can also respond to the changing requirements of the program. If the system cannot be modified later, it gradually turns from software into hardware.

The inevitability of change

Before the architecture can be defined, the software development team must first have a solid understanding of the software’s requirements. These requirements are influenced by several factors, including:

  1. The application’s users and stakeholders
  2. Use cases and the functional requirements derived from them
  3. Non-functional requirements

However, nothing is as certain in a software project as change. Stories of programming projects where the requirements were perfectly defined in advance are mostly heard in folklore. Why are requirements so prone to change, then? Can’t they simply be confirmed with sufficiently good upfront planning?

There are many reasons why requirements change. Business needs evolve constantly, meaning that over time, different things might be desired from the application than originally planned. For example, competitors’ solutions may pressure a reevaluation of the requirements. Likewise, requests and feedback from clients and end users during the project often lead to reconsideration. Laws and regulations concerning the software can also change during its lifecycle.

At the start of a project, the client may not even have a complete understanding of their own needs or the full scope of the problem that needs solving. This understanding often develops and clarifies as the project advances. Furthermore, there may be numerous stakeholders both within and outside the client’s organization, which can complicate communication and introduce bureaucratic challenges in identifying the real requirements. Stakeholders may change over the course of the project, and misunderstandings can further complicate the process of requirement definition.

The client may also have an incomplete understanding of how technology can meet their needs or what it can truly enable. Even the software development team might not have full clarity at the outset. Additionally, the technology itself may evolve during the project, along with the team’s understanding of it.

Requirement changes can arise from several different sources.

A concrete example of requirement changes

Imagine we are designing a globally operating online ticketing platform. On this platform, event organizers can create events, and users from around the world can purchase tickets to these events. The system must be able to serve large numbers of users stably, with low latency and minimal downtime.

While developing such a ticket store, various requirement changes may be encountered. For instance, midway through the project, it might be discovered that a competing application has implemented an advanced event recommendation feature that users find particularly valuable. To maintain competitiveness, the development team may need to address and incorporate similar functionality.

Feedback from end users may, on the other hand, highlight that the initially planned payment options are too limited, as not all widely-used credit cards in certain countries are supported. Event organizers might also bring up a new requirement: they want to advertise other products and services in the ticket store alongside the events, something that hadn’t been considered initially. Additionally, the company developing the ticketing system may realize the need for more comprehensive analytics to reliably track user trends.

It may be discovered during the development of the application that a chosen technology is insufficient for the needs: for example, React Native, selected for the mobile app implementation, may be found limiting in terms of feature support, necessitating a migration to native applications. Later on, a price comparison might reveal that the services required by the application would be significantly cheaper on AWS when scaling to hundreds of thousands of users, compared to Google Cloud, where the system was originally set up. As a result, there may be a need to switch providers.

In some of the target countries for the ticketing app, legislation may change during the development process, impacting the software’s data handling and security requirements. Similarly, a security audit may uncover deficiencies that were not anticipated during the application’s design.

Such changes are surely familiar to any software development team. Often, the biggest challenges in a software development project stem from them. If the development process has locked into rigid technical and architectural choices from the outset, adapting the software to these new requirements may require significant additional work.

On the other hand, it’s important to remember that change is the fundamental nature of software development: if software weren’t inherently ‘soft’—that is, if it couldn’t be altered and adapted to new requirements—it might as well be developed as hardware.

How does architecture prepare for changes?

A well-designed architecture makes the software more flexible and easier to adapt to new or changing requirements. Software architecture should address at least the following three questions:

  1. What are the responsibilities of the software, and how are they distributed?
  2. Where are the internal and external interfaces of the software located?
  3. Which parts of the software are most likely to be subject to change, and which are not?

Software responsibilities

The responsibilities of software refer to the idea that each component of the software has its own role and purpose, from the high-level structure of the application to the smallest building blocks, such as classes and functions. High-level responsibilities may be more abstract in nature: for example, the responsibility of a single software component could be to authenticate a user into the ticket store. At a lower level, an individual function might be responsible for validating a user’s authentication token. In this sense, as mentioned earlier, even the details of the code are part of the architecture.

An essential principle regarding responsibilities is the so-called single-responsibility principle: each software component, whether high-level or low-level, should have only one reason to change. A closely related idea is the separation of concerns, which calls for keeping distinct concerns in distinct parts of the software.

These principles allow software components to remain simple and easier to maintain as separate units. This makes them easier to test, and changes to one component are less likely to cause unexpected ripple effects in others. Similarly, following these guidelines ensures that a single change in requirements does not trigger the need for changes in countless places within the code.
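As a small illustration in the spirit of the ticket store (the class names here are hypothetical), the sketch below separates two distinct reasons to change, pricing rules and receipt formatting, into two units. A change to VAT handling never touches the formatting code, and vice versa:

```typescript
// Hypothetical sketch of the single-responsibility principle:
// two reasons to change, two separate units.

class TicketPricer {
    // Sole responsibility: compute the total price of an order.
    totalPrice(unitPrice: number, quantity: number, vatRate: number): number {
        return unitPrice * quantity * (1 + vatRate);
    }
}

class ReceiptFormatter {
    // Sole responsibility: render a total as a human-readable receipt line.
    format(total: number, currency: string): string {
        return `Total: ${total.toFixed(2)} ${currency}`;
    }
}

const pricer = new TicketPricer();
const formatter = new ReceiptFormatter();
const total = pricer.totalPrice(20, 2, 0.24);
console.log(formatter.format(total, "EUR")); // "Total: 49.60 EUR"
```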

Defining interfaces

Clearly defined responsibilities help in establishing equally clear interfaces for the application’s components. The purpose of interfaces is to ensure that the software components can interact with each other in a predictable and agreed-upon manner. This also allows each component to be encapsulated as a separate entity, meaning that its internal details are hidden behind public interfaces. As a result, software components can be treated as abstractions through their interfaces, without needing to worry about the underlying implementation.

It’s important to note that in the context of architecture, interfaces do not refer only to web interfaces like REST or GraphQL, but to any interface between components that is independent of the implementation method. An interface also includes the software’s interaction with the outside world.

In our ticketing system example, one high-level interface could be the ticket purchasing interface. This could be implemented as a REST interface that the client application connects to from the user’s browser. On a lower level, an interface in the code might involve an object of type TicketInventoryInterface, which provides methods to check the availability of certain types of tickets and update their availability status. It also encapsulates the ticket inventory and its implementation details.

Well-defined interfaces and the use of encapsulation offer many advantages. They make it easier to develop software components as independent units and to handle them modularly. In this way, components are also simpler to test, both individually and together.
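The TicketInventoryInterface mentioned above could look roughly like the sketch below. The method names and the in-memory implementation behind the interface are assumptions for illustration; the point is that callers see only the interface, so the storage details can change freely:

```typescript
// Hypothetical sketch of the TicketInventoryInterface and one
// possible encapsulated implementation behind it.

interface TicketInventoryInterface {
    availableCount(ticketType: string): number;
    reserve(ticketType: string, quantity: number): void;
}

// Callers depend only on the interface above, so the Map used here
// could later be replaced by a database without affecting them.
class InMemoryTicketInventory implements TicketInventoryInterface {
    private counts = new Map<string, number>();

    constructor(initial: Record<string, number>) {
        for (const [type, count] of Object.entries(initial)) {
            this.counts.set(type, count);
        }
    }

    availableCount(ticketType: string): number {
        return this.counts.get(ticketType) ?? 0;
    }

    reserve(ticketType: string, quantity: number): void {
        const left = this.availableCount(ticketType);
        if (left < quantity) {
            throw new Error("Not enough tickets available.");
        }
        this.counts.set(ticketType, left - quantity);
    }
}

const inventory: TicketInventoryInterface = new InMemoryTicketInventory({ vip: 10 });
inventory.reserve("vip", 3);
console.log(inventory.availableCount("vip")); // 7
```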

Decoupling components

When a program is designed so that each component’s responsibilities are carefully considered and their details are encapsulated behind well-designed interfaces, code changes required by evolving requirements do not spread like a chain reaction from one component to another. This is because the components are not tightly coupled to each other. This process of making software components independent of each other is also known as decoupling. It has several practical benefits.

For example, when implementing a web ticket store, one of the key technical decisions is selecting the right database. The database must handle large volumes of ticket data and efficiently manage fast queries and writes during ticket purchases. However, this decision is far from simple, as the true strengths and weaknesses of different database technologies—such as performance and scalability—can often only be fully understood through hands-on experimentation and practical experience.

If scalability is prioritized by choosing a highly scalable database, it may later prove to be an expensive solution or challenging to work with when implementing complex queries. Conversely, a traditional SQL-based relational database may struggle to efficiently manage large, globally distributed datasets.

Even with good planning and preliminary testing, it is difficult to perfectly predict future challenges, so locking into a specific technology can turn out to be a costly decision. However, since the database forms a highly integral part of the entire application, what’s the solution?

One of the most important principles for decoupling interfaces and components is so-called dependency inversion. According to this rule, high-level software code should never depend on low-level implementations. Instead, it should handle them through abstractions and their interfaces.

In our case, the database is considered a low-level implementation. Therefore, the higher-level application logic, such as the service layer, should remain independent of the specific database and the libraries or frameworks that interact with it. This ensures that database dependencies are isolated and do not permeate the rest of the codebase, making it easier to modify or replace the database implementation during development.

A common approach to implementing the dependency inversion principle is through a plugin architecture. Instead of high-level code being directly dependent on low-level implementations, it interacts with them via well-defined interfaces. Below is a hypothetical and simplified example of how a web ticket store could utilize this approach:

 

```typescript
// TicketRepositoryInterface.ts
export interface TicketRepositoryInterface {
    getAvailableTickets(eventId: string): Promise<number>;
    reduceTicketCount(eventId: string, quantity: number): Promise<void>;
}

// TicketPurchaseService.ts
import { TicketRepositoryInterface } from './TicketRepositoryInterface';

export class TicketPurchaseService {
    private ticketRepository: TicketRepositoryInterface;

    constructor(ticketRepository: TicketRepositoryInterface) {
        this.ticketRepository = ticketRepository;
    }

    async purchaseTicket(eventId: string, quantity: number): Promise<void> {
        const tickets = await this.ticketRepository.getAvailableTickets(eventId);
        if (tickets < quantity) {
            throw new Error('Not enough tickets available.');
        }

        await this.ticketRepository.reduceTicketCount(eventId, quantity);
    }
}
```

 

In the example above, TicketPurchaseService is not directly dependent on low-level database code but rather on the TicketRepositoryInterface. Any TicketRepository object that connects to the low-level database can implement this interface. Therefore, the underlying implementation can be easily modified or even replaced entirely.

The code also makes use of the so-called dependency injection method: the object implementing the TicketRepositoryInterface is not instantiated directly within the TicketPurchaseService class. Instead, this dependency is passed to it through the constructor. This applies the principle of loose coupling.

The code highlights why the choice of technology (in this case, the database) was not the most critical aspect from a software architecture perspective. A far more impactful decision is how the software’s dependencies on the database-handling code are managed, rather than the specific technology itself.
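To make the decoupling concrete, the sketch below shows one possible implementation being injected at the composition root. The interface from the example above is repeated for completeness; the in-memory repository class and its use are hypothetical:

```typescript
// A sketch of swapping implementations behind the interface.
// The in-memory repository and the composition root shown here
// are illustrative assumptions, not a real database API.

interface TicketRepositoryInterface {
    getAvailableTickets(eventId: string): Promise<number>;
    reduceTicketCount(eventId: string, quantity: number): Promise<void>;
}

// An in-memory variant, useful in tests and early development.
// A database-backed class could implement the same interface.
class InMemoryTicketRepository implements TicketRepositoryInterface {
    constructor(private counts: Map<string, number>) {}

    async getAvailableTickets(eventId: string): Promise<number> {
        return this.counts.get(eventId) ?? 0;
    }

    async reduceTicketCount(eventId: string, quantity: number): Promise<void> {
        const left = this.counts.get(eventId) ?? 0;
        this.counts.set(eventId, left - quantity);
    }
}

// Composition root: the concrete repository is chosen and injected once.
// Replacing the database later means changing only these lines.
const repository: TicketRepositoryInterface =
    new InMemoryTicketRepository(new Map([["event-1", 100]]));
```

Because the choice of implementation is made in a single place, a test suite can inject the in-memory variant while production code injects a database-backed one, without either change touching the service layer.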

Anticipating susceptibility to change

Preparing for change would be simple if following all of the above principles were entirely straightforward. However, it is often difficult to fully understand in advance how responsibilities within the software should be divided and where the interfaces should lie.

Creating loosely coupled, well-thought-out interfaces between components is not free; it is often more time-consuming than establishing tight couplings. Therefore, it may not always be worth implementing every possible interface that could be developed. Even if time and resources allow for making the software endlessly flexible, the limitations of the developers’ foresight usually set a boundary.

Perhaps the most important aspect of anticipating changes is identifying which parts of the software are more prone to change and which are less likely to be altered. For example, in our previous analysis of the global web ticket store, we identified that the database poses a risk of change, and its impact could be significant if dependencies are not carefully managed.

Other change-prone parts of the ticket store software are also easy to identify. For instance, it’s wise to anticipate that not all relevant user roles or use cases for the application may have been fully identified, meaning that new user roles might need to be added. The application should therefore be designed in such a way that implementing new user roles is straightforward. Likewise, over time, there may be a need to support different payment methods, ticket types, event types, pricing models, and marketing campaigns. Different countries and regulations could also introduce changes in language support requirements, ticket resale practices, and tax calculations. Naturally, the application’s user interface is also highly susceptible to change.

On the other hand, the core logic behind ticket purchasing and managing ticket availability is likely to remain stable if designed well. These processes represent the fundamental business operations of the ticket store, and they are less influenced by factors such as geographic location or individual stakeholders. However, even these areas are not completely immune to change; for instance, the purchasing process might evolve by introducing features like targeted offers, personalized recommendations, multi-event ticket bundles, installment payment options, and more. Nevertheless, the basic process where the user:

  1. selects an event,
  2. chooses a ticket type and adds it to the cart,
  3. pays for the contents of the cart,
  4. receives a confirmation of the payment

is likely to remain largely the same.

Anticipating change helps the software development team better understand the responsibilities within the software and identify where the boundaries of its interfaces should be drawn. It also clarifies the direction in which component dependencies should flow—specifically, which aspects of the software represent core business logic, upon which the implementation details should depend. Furthermore, it aids in prioritizing development tasks. In development, it makes sense to focus first on the core business logic and isolate the less critical, more change-prone components behind interfaces, which can then be implemented in detail later on.

Architecture examples

In this blog post, we haven’t focused much on individual architectural models. This is because the fundamental ideas underlying practically every sensible architectural model are largely the same as those presented above. Dividing responsibilities, identifying and defining interfaces, encapsulation and abstraction, decoupling tightly bound components, and inverting dependencies, so that implementation details depend on the core business logic of the application, are at the heart of all software architecture.

However, to illustrate these principles, two architectural models are presented below: Alistair Cockburn’s Hexagonal Architecture and Robert C. Martin’s Clean Architecture. Notable similarities can be observed between the two.

An example of Alistair Cockburn’s Hexagonal Architecture.

In the Hexagonal Architecture model, the key concepts are the so-called ports and adapters. The core business logic of the application, which is the most likely to remain stable, connects to change-prone implementation details, such as the database and UI, through ports. Ports essentially define the interface for interaction between the software and the outside world. Adapters, on the other hand, are the concrete implementations of the interfaces defined by the ports. They manage input from the outside world and produce output from the software.
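A minimal sketch of one port and one adapter follows; all names in it are hypothetical. The core service depends only on the port interface, while a concrete adapter implements it and could be replaced by an SMS or push-notification adapter without touching the core:

```typescript
// Illustrative ports-and-adapters sketch with hypothetical names.

// A port: how the core talks to the outside world.
interface NotificationPort {
    notify(recipient: string, message: string): void;
}

// Core business logic, dependent only on the port.
class TicketConfirmationService {
    constructor(private notifications: NotificationPort) {}

    confirmPurchase(email: string, eventName: string): void {
        this.notifications.notify(email, `Your ticket to ${eventName} is confirmed.`);
    }
}

// An adapter: one concrete implementation of the port. Here it just
// records messages; a real adapter would call an email service.
class RecordingEmailAdapter implements NotificationPort {
    public sent: string[] = [];
    notify(recipient: string, message: string): void {
        this.sent.push(`${recipient}: ${message}`);
    }
}

const adapter = new RecordingEmailAdapter();
new TicketConfirmationService(adapter).confirmPurchase("ada@example.com", "DevConf");
console.log(adapter.sent[0]); // "ada@example.com: Your ticket to DevConf is confirmed."
```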

It’s important to note that the hexagonal shape itself does not carry any conceptual significance in relation to Cockburn’s model. The architecture is referred to as hexagonal simply because it is traditionally represented within a hexagon.

Clean Architecture according to Robert C. Martin

In Robert C. Martin’s Clean Architecture model, the architecture is divided into four layers: entities, use cases, interface adapters, and external systems. The idea of the model is that the farther a layer is from the core, the more prone it is to changes, and from the perspective of the application’s architecture, it should be treated more as an implementation detail. The dependencies between the layers flow inward: the outer layer is always dependent on the inner layer.

Entities represent the core business logic and central data models of the application. The use case layer, on the other hand, handles the application logic that implements the software’s use cases. Interface adapters, such as web request controllers, UI views, and the database query layer, transform the data into a format that external systems, like client software, browsers, or databases, can understand. The interaction with external systems is managed by various software frameworks and drivers.

Notably, in the Clean Architecture model, the relationship between the use case layer and the interface adapter layer follows a similar ports-and-adapters logic as in the Hexagonal Architecture model. The interface adapters correspond to the adapters in the hexagonal model, while the use case layer interacts with them solely through defined interfaces, or ports.
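The inward-pointing dependencies can be sketched as follows; the class names are illustrative assumptions. An entity enforces a business rule, a use case orchestrates it, and a controller in the interface adapter layer translates an incoming request into a use case call:

```typescript
// A minimal, hypothetical sketch of Clean Architecture layering.
// Each outer layer depends only on the layer inside it.

// Entities layer: core data model and business rule.
class TicketEvent {
    constructor(public readonly id: string, public ticketsLeft: number) {}
    canSell(quantity: number): boolean {
        return quantity > 0 && quantity <= this.ticketsLeft;
    }
}

// Use case layer: application logic, depends only on entities.
class SellTicketsUseCase {
    execute(event: TicketEvent, quantity: number): boolean {
        if (!event.canSell(quantity)) return false;
        event.ticketsLeft -= quantity;
        return true;
    }
}

// Interface adapter layer: e.g. a web controller translating an
// HTTP-like request into a use case call. It depends inward only.
class SellTicketsController {
    constructor(private useCase: SellTicketsUseCase) {}
    handle(event: TicketEvent, body: { quantity: number }): { status: number } {
        const ok = this.useCase.execute(event, body.quantity);
        return { status: ok ? 200 : 409 };
    }
}

const event = new TicketEvent("event-1", 5);
const controller = new SellTicketsController(new SellTicketsUseCase());
console.log(controller.handle(event, { quantity: 2 }).status); // 200
console.log(event.ticketsLeft); // 3
```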

 

Why architecture doesn’t solve everything

At Buutti, after hundreds of software projects, we have observed that the difference between a successful and a failed project often lies in how well the architecture is able to meet the software’s requirements and prepare for the changes in requirements during (and after) the development process. However, architecture alone is not the key to everything.

No matter how well an architecture is designed, it becomes ineffective if it’s not adhered to or if knowledge of it doesn’t permeate the entire team. Similarly, if the development team does not receive up-to-date and clear information about the software’s requirements, it is difficult to design the architecture according to those requirements, let alone anticipate their changes.

This is why the development team should be engaged early in the requirements definition process, beginning with the high-level discussions. Likewise, if relevant stakeholders—especially end users—are not consulted regularly and early on during the requirements phase, even the best architecture cannot address the real needs of the application.

A fundamental prerequisite for effective software architecture design is clear and timely communication within the organization. Information must flow seamlessly from stakeholders defining the requirements to those responsible for the business, and ultimately to the software development team. For a software project to truly succeed, technical personnel cannot be treated as a mere execution layer, disconnected from the business. To stay fully informed about the requirements, they must be actively involved in discussions surrounding the application’s business goals!