Microservices Series #5: Beware best practices

This article is the fifth in a series of posts by Richard Rodger on the subject of the microservices architecture. It first appeared on NodeCrunch, the nearForm blog.


Catch up on the first four articles in the Microservices Series on the nearForm blog.

This post will take a look at best practices and ask whether that concept is really “best.”

Enterprise software developers are often exhorted or forced to adopt what are termed “best practices.” The aim of best practices is to enable us to cope with the fragility of the monolith in a more pragmatic and ethical manner. They are intended to ensure that no matter what the quality of the team or the scale of the problem, the project will always be successful.

Within the best-practice paradigm, project failure is regarded simply as the inadequate application of best practices. Thus, best practices claim to provide predictive power; the better you execute them, the higher your chance of success.

I would like to challenge best practices on the accuracy of their predictions. While there may be some correlation between the intensity of activity on a given best practice and the ultimate outcome of a project, the correlation must be sufficiently strong to justify the effort. Further, we can ask if a given best practice has inherent limits, meaning that no amount of extra effort will improve the chance of success.

Unit tests … do not deliver quite as much value as you would think, especially when it comes to custom enterprise software.

Unit tests

Unit tests are a core best practice of software development. Few teams attempt to build systems without them. And yet they do not deliver quite as much value as you would think, especially when it comes to custom enterprise software.

Categories of code

There are two broad categories of code in the enterprise world: business logic and infrastructure components.

  • Business logic – code that executes and applies the business rules of the company. It is the code closest to the real work of the company; its requirements are hard to capture accurately, change constantly, and may not even be fully known by anyone. Business logic does the following:
    • Provides shopping carts, sales taxes, and invoicing
    • Updates the database
    • Delivers content, but only to those who should have access
    • Defines integration interfaces
    • Controls business processes
  • Infrastructure components – code that supports the business logic. Infrastructure components do the following:
    • Provide database drivers
    • Execute efficient algorithms
    • Parse network traffic
    • Route messages
    • Build user interfaces

Infrastructure components are necessary, but distant from the business needs of the company. They are easy to specify, and must often follow public standards. Once complete, they change slowly and predictably. Full correctness can often be achieved with programming ability alone.

One of these categories of code benefits greatly from unit tests; the other, not so much.

Infrastructure components are stable enough that high or even full unit test coverage is a net positive. When well written, they do not often require refactoring, and their interfaces tend to remain stable and well defined. In this case, unit tests are an investment that keeps paying dividends. They allow you to change or even refactor module internals without too much pain. Unit tests can verify adherence to external specifications, and can even characterize code behavior to a certain extent.
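For illustration, here is a minimal sketch of what an infrastructure-level unit test can look like in a Node.js code base. The parseContentType function and its test cases are invented for this example; the point is that the behaviour under test is pinned down by a stable external specification (RFC 7231), so the tests rarely need rewriting.

    // A hypothetical infrastructure module: parsing a Content-Type header.
    import { test } from 'node:test';
    import assert from 'node:assert';

    // Illustrative implementation; in a real project this would live in its own module.
    function parseContentType(header: string): { type: string; charset?: string } {
      const [type, ...params] = header.split(';').map((s) => s.trim().toLowerCase());
      const charsetParam = params.find((p) => p.startsWith('charset='));
      return { type, charset: charsetParam?.split('=')[1] };
    }

    test('parses a bare media type', () => {
      assert.deepStrictEqual(parseContentType('text/html'),
        { type: 'text/html', charset: undefined });
    });

    test('parses a charset parameter case-insensitively', () => {
      assert.deepStrictEqual(parseContentType('Text/HTML; Charset=UTF-8'),
        { type: 'text/html', charset: 'utf-8' });
    });

Because the specification changes slowly, tests like these keep earning their keep for years.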

This effectiveness does not carry over to business logic. Business logic code is different. It is subject to the vagaries of bad specification and constantly changing requirements.

Good tests and high coverage give a false sense of security.

False sense of security

We have collectively allowed ourselves to be seduced by the effectiveness of unit testing infrastructure modules. As a result, we end up with some nasty diseases. High test coverage has exponentially high costs. As requirements change, implementation code must change, and therefore the tests must change. The closer the tests mirror the structure of the implementation code — which with high coverage, they must — the more the tests are invalidated by changes and must be rewritten. The tests themselves cannot test the business requirements driving the implementation because they are operating at the code level of individual components, not at the level of business processes.
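As a sketch of this coupling, consider a hypothetical invoice-total calculation and a test that asserts not only the result but also how a collaborating tax service is used. Every name below is invented for illustration.

    // A business-logic test that mirrors the implementation's structure.
    import { test, mock } from 'node:test';
    import assert from 'node:assert';

    interface TaxService { rateFor(region: string): number; }

    // Today's rule: total = sum of line items, plus a single regional sales tax.
    function calculateInvoiceTotal(lines: number[], region: string, tax: TaxService): number {
      const subtotal = lines.reduce((a, b) => a + b, 0);
      return subtotal * (1 + tax.rateFor(region));
    }

    test('applies the regional tax rate exactly once', () => {
      const rateFor = mock.fn(() => 0.25);
      assert.strictEqual(calculateInvoiceTotal([100, 60], 'IE', { rateFor }), 200);
      // This assertion encodes *how* the total is computed, not just what it
      // should be. When the business adds per-line tax categories or discounts,
      // the implementation and this test must be rewritten together.
      assert.strictEqual(rateFor.mock.callCount(), 1);
    });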

Thus, good tests and high coverage give a false sense of security. This often manifests as a failure to deliver business value: too much time is spent perfecting and extending test coverage, and not enough time ensuring that the right features are built. Many a system with good test coverage has turned out to have significant performance issues in production.

Perhaps the worst sin of unit testing is the wasted effort that arises from lazy or simplistic enforcement of coverage standards. If the entire code base of a project is required to have high coverage, say 80% or 90%, then all code, irrespective of relevance to business value, receives the same attention and effort. Rather than focusing on the 20% of the code that delivers 80% of the value*, the testing effort of the team is diffused over the entire code base. Low-impact code is delivered with unnecessarily high quality, and high-impact code that should have 100% coverage is under-tested.

*This is an instance of the Pareto Principle, which is short-hand for a kind of probability distribution that reflects an underlying imbalance between causes and effects. It describes systems where small subsets of causes have very large effects. This is a common feature of code bases, as anyone who has conducted performance profiling can attest.

Thus, a code base with consistently high coverage predicts success far less strongly than it might appear at first. One hundred percent coverage is not an indicator of 100% correctness; it is perfectly possible to hit 100% coverage without verifying results fully. It is equally possible to hit 100% coverage on a code base that simply cannot scale because it has fundamental algorithmic flaws.
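A contrived sketch makes the point (findDuplicates and its test are invented for illustration). The single test below executes every line and branch, so a coverage tool reports 100%, yet it verifies almost nothing about correctness and nothing at all about how the quadratic algorithm behaves at scale.

    import { test } from 'node:test';
    import assert from 'node:assert';

    // O(n^2) by construction: harmless in a unit test, a fundamental flaw at production scale.
    function findDuplicates(items: string[]): string[] {
      const duplicates: string[] = [];
      for (const item of items) {
        if (items.indexOf(item) !== items.lastIndexOf(item) && !duplicates.includes(item)) {
          duplicates.push(item);
        }
      }
      return duplicates;
    }

    test('findDuplicates runs', () => {
      // Every line and branch is executed on a three-element array, but the only
      // assertion is that *something* comes back. Coverage: 100%. Assurance: almost none.
      assert.ok(Array.isArray(findDuplicates(['a', 'b', 'a'])));
    });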

The predictive power of unit tests against business logic code is simply not that strong, and is limited by the fact that merely executing every line of code does not tell you much about correctness or performance.

The danger of code-oriented best practices

The best practices for building code are limited only by the imaginations of developers. Steve McConnell’s classic book Code Complete contains several hundred pages’ worth. They are all great ideas. None of them will save your project from under-specified requirements or the need to quickly address the launch of a competitor’s new product.

Best practices at the code level — such as good naming, consistent code layout, and avoiding complex boolean expressions — are simply hygiene factors. Not doing them certainly means you are no engineer. Doing them only opens the door. At best, they make it easier to deal with the inevitable need to refactor when your code structure can’t meet requirements.

The real world is fuzzy and chaotic. Creating ever more complex abstract structures to deal with ever more emerging complexity is a losing game.

Code-oriented best practices at the design level are even more dangerous in giving you a false sense of security. The principles of object-oriented (OO) design — the use of object design patterns, the rigorous classical data structures — are all predicated on a mistaken belief: that the business world reduces cleanly to mathematical set theory. It does not. It is not possible to define entities in terms of a fixed set of attributes and relationships. The real world is fuzzy and chaotic. Creating ever more complex abstract structures to deal with ever more emerging complexity is a losing game.

The real world vs. code

Consider physical addresses. There are vast numbers of website forms demanding a fixed set of address fields. Most of these address forms assume that every address has a postal code, and insist upon it as a required field. This assumption is false; over one-third of the countries in the Universal Postal Union have no postal codes. Extra logic is needed to deal with this case. Nor is the list of countries static; for example, Ireland introduced postal codes in 2015.
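Here is a sketch of what that extra logic looks like in practice; the field names and the country list are illustrative assumptions, not a complete rule set.

    // Address validation that treats the postal code as conditionally required.
    interface Address {
      country: string;       // ISO 3166-1 alpha-2 code, e.g. 'IE', 'AE'
      lines: string[];       // free-form address lines
      postalCode?: string;   // optional: many countries have none
    }

    // Illustrative subset of countries that do not use postal codes.
    const NO_POSTAL_CODE = new Set(['AE', 'AO', 'AG', 'BS', 'BZ']);

    function validateAddress(addr: Address): string[] {
      const errors: string[] = [];
      if (addr.lines.length === 0) {
        errors.push('at least one address line is required');
      }
      // Only demand a postal code where the destination country actually uses one.
      if (!NO_POSTAL_CODE.has(addr.country) && !addr.postalCode) {
        errors.push('postal code is required for this country');
      }
      return errors;
    }

    // An address in the United Arab Emirates passes without a postal code.
    console.log(validateAddress({ country: 'AE', lines: ['PO Box 123, Dubai'] })); // []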

Next, consider personal names. Many website forms demand a first name, a middle initial, and a last name. This structure is unsuitable for many cultures (so much so that the international web standards body, the World Wide Web Consortium (W3C), has a detailed article on the topic). The ordering of first and last name can be inverted in different cultures, affecting sorting behavior. In Iceland, it makes no sense to sort by last name, as this is simply a patronymic derived from a parent's name; Icelandic phone books are sorted by first name. In Spain, you have two last names, one from each parent. Middle initials are used far less frequently in English-speaking countries outside North America. Many non-Western cultures use a single name for a person, and last names are uncommon. As your company expands its business internationally, you will find that your data structures, sorting algorithms, and validation rules need constant refactoring.
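One way to cope, sketched below, is to store the name as the person writes it plus an optional sort key, loosely along the lines of the W3C guidance; the field names and sample data are assumptions for illustration.

    // A name model that avoids hard-coding first/middle/last.
    interface PersonName {
      fullName: string;      // the name as the person writes it, in their own order
      sortKey?: string;      // what to sort by, chosen per locale or by the person
    }

    const names: PersonName[] = [
      { fullName: 'Björk Guðmundsdóttir', sortKey: 'Björk' },            // Iceland: sort by first name
      { fullName: 'Gabriel García Márquez', sortKey: 'García Márquez' }, // Spain: two last names
      { fullName: 'Sukarno' },                                           // a single name
    ];

    // Sort with a locale-aware collator rather than naive string comparison.
    const collator = new Intl.Collator('en');
    names.sort((a, b) => collator.compare(a.sortKey ?? a.fullName, b.sortKey ?? b.fullName));
    console.log(names.map((n) => n.fullName));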

Observe the vast effort required to implement software systems that deal with taxation or other laws. Monitoring risk exposure in the banking system depends on the ability of trading systems to fully model the complexities of derivative contracts. Traders constantly innovate on the terms of the contracts, thus invalidating the models.

We can see from these examples that most concepts from the real world do not submit easily to the strict classification structures of programming languages.

The evolution of OO programming demonstrates just such a failure to accurately model the world. At first, OO was hailed as a clean conceptual model for the real world. Classes would correspond to properties and behaviors of actual things. Inheritance could be used to handle specialization. The Staff class has Employee and Contractor sub-classes. Some Employees are Managers, so sub-class again. Until one day a Contractor is hired as a Manager. Oh dear. Multiple inheritance or interfaces? Surely one of these is the right approach?
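To make the dead end concrete, here is a minimal sketch of that hierarchy; the class members are invented for illustration.

    class Staff { constructor(public name: string) {} }
    class Employee extends Staff {}
    class Contractor extends Staff {
      constructor(name: string, public agency: string) { super(name); }
    }
    class Manager extends Employee { reports: Staff[] = []; }

    // One day a contractor is hired as a manager. A single-inheritance type
    // system offers no clean way to say "a Contractor that is also a Manager":
    // the object must be re-modelled, duplicated, or shoe-horned in through
    // interfaces and composition.
    const dana = new Contractor('Dana', 'Acme Staffing');
    // const promoted: Manager = dana;  // type error: a Contractor is not a Manager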

It is telling that most features of OO languages need to be supplemented by a whole host of design patterns before professional software can be written. Entire books of design patterns are needed. Special skills of analysis and design are required to choose a class structure that is sufficiently future-proof. Doesn't this feel like a losing battle of ever more complex refinement?* More importantly, technical debt tends to degrade the soundness of any abstract structure over time, so that the internal coherence of the initial structure, especially in the context of business logic code, is a weak predictor of project success.

*The dominant theory of planetary motion until the early 17th century and the insights of Galileo was the Ptolemaic theory of epicycles, where the planets were said to move in hierarchies of circles upon circles. Discrepancies could be corrected by adding ever more circles. Object-oriented systems suffer from the same dynamic, in that ever-smaller gains in model accuracy require ever-larger amounts of complexity.

In the world of software, our lack of understanding and naive confusion have made us superstitious and political.

The human side

The best practices on the human side of software development are equally suspect. We can observe that civil and mechanical engineering do not seem to require such endless introspection about the best approach to project management. Systematic, common-sense project management does work; there are any number of successful civil engineering projects in China alone to put the case to rest. And when traditional engineering does fail to deliver, human error is seen as the primary cause, rather than the principles of project management themselves.

In the world of software, our lack of understanding and naive confusion have made us superstitious and political. There is little hard evidence to back up the use of many of our cherished best practices. Projects fail despite daily stand-ups. Projects fail despite pair programming. Projects fail despite zero bug counts. Projects fail despite formal (but Agile) processes like SCRUM.

In many cases, these best practices are imposed on the team, often without malice: the road to hell is paved with the best of intentions by those who believe they are acting professionally. The very fact that individual practices have been strictly enforced in large numbers of cases is the key to understanding how weakly predictive they are. If a given practice really were strongly predictive of project success, it would be rapidly adopted; good ideas with obvious power spread rapidly*. Even bad execution would deliver good results. There has been more than enough sampling of software project practices for clear winners to have emerged by now. Even the strongest contender, unit testing, is only weakly predictive.

Continuing to search for software development nirvana in ever more intricate variants of Agile, or even in reactionary approaches, is clearly not working.

Conclusion

Continuing to search for software development nirvana in ever more intricate variants of Agile, or even in reactionary approaches, is clearly not working.

*Unit testing is a good example of a good idea spreading quickly. It’s just a pity that it is not a more powerful predictor of project success.

For more information about microservices, check out the nearForm webinar “The Tao of Microservices.”


Richard Rodger is the co-founder and technology thought leader behind nearForm. He is the creator of Seneca.js, an open source microservices toolkit for Node.js, and nodezoo.com, a search engine for Node.js modules. Richard’s first book, Beginning Mobile Application Development in the Cloud (Wiley, 2011), was one of the first major works on Node.js. His new book, The Tao of Microservices (Manning), is due out in 2017. Contact Richard on Twitter.
