This article is the second in a series of posts by Richard Rodger on the subject of the microservices architecture. It first appeared on NodeCrunch, the nearForm blog.
In the first post of this series, we took a general look at microservices as a phenomenon and at how they can help us build better custom enterprise software. In today’s post, I take an initial deep dive into the issues that we face with our software, with the aim of giving you a good picture of the state of enterprise software at the moment.
Gathering evidence
As per the previous post, we have admitted that we have a problem in custom enterprise software. We have identified microservices as a fundamental new approach that can radically improve the timeliness, quality and reusability of our code. Now what?
If we intend to propose microservices as a software engineering strategy for our organizations, we need to gather the evidence. Best practice here is to use the scientific approach: analyse accurate data and conduct long-term studies in order to draw conclusions and establish facts.
Here, we run into a snag. In our field, we don’t have good data. This is because long-term studies are very difficult to execute. Ideally, we would repeatedly gather a random sample of several hundred developers and randomly assign them to projects where we control variables such as scope, schedule and complexity. Over several years, we would collect a sufficient number of samples to build a reasonably predictive model. Clearly, such randomised controlled experiments would be utterly impractical and far too costly. It’s also very hard to isolate causation from correlation, as there are so many other factors in any given project.
(I should point out that there have been attempts to conduct stronger experiments into the nature of software development. The difficulty of doing so is clear from the many confounding variables, not least human emotional states. Also, the majority of subjects in these experiments were undergraduates, with the result that task complexity and system size had to be much smaller than is typical in industrial projects.)
In the absence of high-quality data from long-term studies, we must make do with other, weaker techniques: analysis from first principles, qualitative inspection, and quantification of what we observe. Let’s take a look at the first of these.
Analysis from first principles
“Analysis from first principles” means deduction from basic facts to reach conclusions that not only offer insight, but also make accurate predictions. The analytical evidence yields the following four key points:
- Scale brings complexity.
- Entropy increases over time.
- Traditional set theory is not strong enough for business.
- Human cognitive biases hinder our ability to accurately estimate.
Let’s look at each of these in turn.
Scale
Production software systems, especially those in large companies, are large. They have many moving parts. Consider a system with three parts: there are three possible interactions. Add a new part, making four in total, and now there are six possible interactions. In general, a system with n parts has n(n – 1)/2 possible interactions. The graphic below illustrates how the number of possible interactions grows quadratically as n increases.
You may point out that real systems have far fewer part interactions, as it is almost never the case that every part talks to every other part. Ironically, this does not matter much. As soon as you have more than a few tens of parts, and scaling up to meet business requirements means you soon will, the trouble has already started.
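To make the growth concrete, here is a quick illustrative sketch that tabulates n(n – 1)/2 for a few system sizes; the part counts chosen are arbitrary examples, not measurements from any real system.

```typescript
// Number of possible pairwise interactions between n parts: n(n - 1) / 2.
function possibleInteractions(parts: number): number {
  return (parts * (parts - 1)) / 2;
}

// Arbitrary example sizes, chosen only to show how quickly the count grows.
for (const n of [3, 4, 10, 30, 100]) {
  console.log(`${n} parts -> ${possibleInteractions(n)} possible interactions`);
}
// 3 parts -> 3, 4 -> 6, 10 -> 45, 30 -> 435, 100 -> 4950
```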
The failings of enterprise software development become more and more apparent as the scale of systems increases. Our traditional coping mechanisms — increasing team size, making people work evenings and weekends, holding more and more team meetings — are no longer enough. Communication difficulties — remember, it’s human beings we are talking about — quickly outweigh the benefits of additional team members. We have reached the social and intellectual limits of humans.
Entropy
The term “entropy” is borrowed from physics, where it is a measure of the thermodynamic disorder of a physical system. In software development, we use it as an analogy for what we more commonly call technical debt, which increases over time as a system is tweaked to accommodate new requirements.
Set theory
The math doesn’t work anymore.
Programming languages use a mathematical model for information, and that model is grounded in set theory. Unfortunately, traditional set theory is a poor fit for custom software development because it does not take into account the human psychological factors behind business processes. Set membership is fixed and inflexible, whereas those human motivations make business processes inconsistent, ambiguous and arbitrary. What’s more, they are often governed by unwritten rules and context.
The impact is an increased level of complexity in our software, which is self-evident from first principles.
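To illustrate the point, here is a minimal sketch. The domain (a shipping-region rule) and every name in it are invented for illustration; the point is only that a crisp, set-based classification keeps sprouting flags and branches as the unwritten business rules surface.

```typescript
// Crisp set membership: every order falls into exactly one region.
type Region = 'domestic' | 'international';

// The rule as first specified: one clean predicate over a set of country codes.
function classify(countryCode: string): Region {
  return countryCode === 'IE' ? 'domestic' : 'international';
}

// The rule as actually practised: context and exceptions that the crisp model
// above cannot express, each one adding another flag and another branch.
interface OrderContext {
  isLongStandingCustomer?: boolean; // "domestic rates as a gesture", case by case
  promotionActive?: boolean;        // the sales team overrides the rule entirely
}

function classifyInPractice(countryCode: string, ctx: OrderContext = {}): Region {
  if (ctx.promotionActive) return 'domestic';
  if (ctx.isLongStandingCustomer) return 'domestic';
  return classify(countryCode);
}
```

Each new exception is easy to bolt on, but the accumulation of such branches is exactly the complexity the analysis predicts.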
People
The final analytical argument that software development is in crisis is that the human intellect is weak: our minds suffer from more cognitive biases than we realise. We are less rational than we like to believe, and this inevitably affects the output of our work.
Here are some of the human biases that adversely affect software development:
- Availability heuristic: overestimating the likelihood of recent events recurring, simply because they are easy to recall. This manifests as a fixation on some failure modes at the expense of others, and the overestimation of the effect of given development practices.
- Base rate fallacy: weighting specific details more highly than general trends. “Our project will succeed because we have a super-star developer” ignores the fact that most software projects are not considered successful in retrospect.
- Confirmation bias: focusing on evidence that supports a prior belief, and ignoring evidence that does not. The typical symptom is a day spent bug chasing, only to fix the bug within 10 minutes of starting work the next morning.
- Dunning-Kruger effect: unskilled developers overestimate their abilities, because they don’t know what they don’t know.
- Planning fallacy: the time required to complete a future task is systematically underestimated. Also known as Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
That’s all for this post. Next time, I’ll be taking a look at qualitative and quantitative evidence, and investigating how these can help us in our quest to make our enterprise software projects more successful. Until then, check out this webinar, which provides an introduction to microservices’ core concepts.
Richard Rodger is the co-founder and technology thought leader behind nearForm. He is the creator of Seneca.js, an open source microservices toolkit for Node.js, and nodezoo.com, a search engine for Node.js modules. Richard’s first book, Beginning Mobile Application Development in the Cloud (Wiley, 2011), was one of the first major works on Node.js. His new book, The Tao of Microservices (Manning), is due out in 2017. Contact Richard on Twitter.