October 11, 2017
Part 2: Optimizing and scaling microservices. Organic growth of ecosystems.
A microservices approach is not a silver bullet for all software architecture problems. It introduces trade-offs and challenges of its own. However, the gains in process and human performance are generally considered worth the technology overhead.
Here are some general arguments against using sophisticated SOA.
Server performance and overhead in communication
By encapsulating small modules and introducing communication on a higher service level, we introduce extra layers of abstraction, translation, security, parsing and validation to computation.
Common ways to address the issue are:
– Run microservices as separate applications, but host them together, where connectivity is faster.
– Use lower-level, efficient protocols.
– Compress message bodies to reduce network transfer time.
– Alternatively, send messages uncompressed to save decompression and parsing time on the target, when bandwidth is not the bottleneck.
– Use flows that require less communication. For example, token authentication with public keys is much faster than OAuth 2.
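The compression trade-off above can be illustrated with a minimal sketch using Python's standard zlib module. The payload shape and sizes are illustrative, not from the original article:

```python
import json
import zlib

# A hypothetical, repetitive service message body, as typical JSON payloads are.
payload = json.dumps(
    [{"sku": f"ITEM-{i}", "qty": i % 10, "status": "in_stock"} for i in range(500)]
).encode("utf-8")

# Compressing shrinks the body and therefore the network transfer time...
compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)
print(f"original: {len(payload)} bytes, compressed: {len(compressed)} bytes "
      f"(ratio {ratio:.2f})")

# ...but the receiving service pays CPU time to decompress before it can parse.
assert zlib.decompress(compressed) == payload
```

Whether to compress is thus a bandwidth-versus-CPU decision, measured per message type rather than decided globally.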
While at first glance the architecture introduces more server workload, it in fact provides more opportunity for scaling resources than do monoliths. The granularity of the designed system allows each service to be scaled independently.
– Segment databases (by region, by business segment, by unit, etc.).
– Mirror stateless services, i.e. those without a database of their own (these can be load-balanced without any extra complexity).
– Mirror frequently used common-data services, together with their database.
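The database segmentation idea can be sketched as a simple routing layer: each segment's data lives in its own shard, and a router picks the connection by a segment key. The shard names and regions below are hypothetical:

```python
# Region-based database segmentation: each region's data lives in its own
# shard, which can then be scaled, mirrored, or relocated independently.
REGION_SHARDS = {
    "emea": "postgres://db-emea.internal/crm",
    "apac": "postgres://db-apac.internal/crm",
    "amer": "postgres://db-amer.internal/crm",
}

def shard_for(region: str) -> str:
    """Return the connection string for the shard holding this region's data."""
    try:
        return REGION_SHARDS[region]
    except KeyError:
        raise ValueError(f"unknown region: {region!r}")

print(shard_for("apac"))
```

The same pattern applies to segmentation by business unit or industry segment; only the key changes.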
As mentioned in the first part of this article, individually developed and deployed services benefit from having the best technology stack chosen for their particular function, so make sure each service is implemented with optimal technology.
A flexible system with loosely coupled modules requires less pre-planning in terms of scale.
Traditionally, there are three axes of scalability in a computer system:
– functional decomposition;
– data segmentation; and
– replication of resources.
All these come naturally to microservices.
Microservices follow logic similar to the design principles of The Selfish Class: each service is supposed to protect its internal workings independently.
If a security violation should occur, the safest thing to do is to minimize the possible damage and the extent of the intrusion. Compromising one service does not give direct access to any other service. Regardless of its internal implementation, no service should be granted more privileges or access than the absolute minimum.
Distributed points of failure and debugging across multiple services
Just as in the case of security, it is, indeed, more work to handle distributed points of failure. However, independent failure is good. Should we suspend access to our shopping cart service, because the in-website instant messaging feature crashed its database?
One solution may be to create a global logging service, where an execution ID is passed as a parameter to every service in the chain for the purpose of tracing. However, the goal in any SOA is to treat each service as black-box system, which is tested on its own. Therefore, as much debugging as possible should be handled by service-specific end-to-end tests.
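The execution-ID idea above can be sketched in a few lines: the first service in a chain generates an ID, and every downstream call forwards it unchanged, so log lines from all services can be correlated centrally. The header name here is an assumption for illustration:

```python
import uuid

# Hypothetical header carrying the execution ID through the service chain.
TRACE_HEADER = "X-Execution-Id"

def with_trace_id(headers: dict) -> dict:
    """Return headers carrying an execution ID, generating one if absent."""
    out = dict(headers)
    out.setdefault(TRACE_HEADER, str(uuid.uuid4()))
    return out

def log(service: str, headers: dict, message: str) -> str:
    """Emit a log line tagged with the execution ID for central aggregation."""
    return f"[{headers[TRACE_HEADER]}] {service}: {message}"

# Service A starts a chain and forwards the same headers to service B;
# both log lines share one execution ID and can be traced as one flow.
headers = with_trace_id({})
line_a = log("cart", headers, "checkout started")
line_b = log("billing", with_trace_id(headers), "invoice created")
assert line_a.split("]")[0] == line_b.split("]")[0]
```

In practice the ID would travel as an HTTP header or message attribute; the point is that no service ever overwrites an ID it receives.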
There are multiple approaches and various degrees of employing automation when it comes to microservices.
Purists will advocate that services must live their own lives, with no need to artificially join them together. This is a model of complete independence for a Feature-Driven-Development-style environment, with asynchronous delivery of modules and multiple independent development teams.
However, in the early stages of a single-product team project, there is often a business drive to have single click global releases.
Case study 1
A large international enterprise.
System: A CRM system designed to serve the organization globally. Various departments across the world work with the same customers, but in different segments of the industry and with completely different business processes and services. Data is extremely sensitive and is accessed and shared based on sales territory, sub-industry segmentation, specific department relationships, exceptional agreements with customers or departments, and individual employee judgement. A number of external systems enrich or use the data, but this is also based on business segmentation, and each segment has different data residency and exposure regulations.
Landscape: A multi-million-dollar undertaking for a global CRM system, set in a conservative international organization. SOA principles are poorly understood and slowly adopted in the global environment. Overnight data loads are widely used in the organization.
Technology Teams: From highly skilled specialist programmers of an external implementation services company to strong subject-matter experts and resident generalist analyst-programmers, working together but spread across three continents.
Strategy: Multiple instances of the CRM environment, one for each business and regional segment, distributing common functionality across the instances in an application-plugin fashion. Each instance can customize or build its own functionality on top of the common behavior. A data-entity-level subscription service with real-time communication. Distributed validation.
Tactics: Proxy and adaptor microservices as facades to integrate with legacy systems. Modular development of completely independent, deployable components. All interfaces are defined as public APIs, ensuring that each delivery team can operate independently.
Case study 2
A small technology product company.
System: Multi-tenant, cloud-based inventory management software. It must be versatile enough to cover most markets, regions and categories of customers. The architecture must allow quick design-to-production cycles; hundreds of millions of page loads per day; exceptional robustness under cardinal changes in functionality; and easy branching of functionality.
Landscape: Startup culture. While many of the employees have very different backgrounds and levels of maturity in software development, all are professionals and open to new approaches.
Technology Teams: Hand-picked, top-end specialist programmers. Company policy is to invest in quality rather than quantity. All work in the same location.
Strategy: Build our own multi-tenancy platform based on microservices. Make all APIs public and allow alternative functionality modules to be mixed and matched per region. Allow regions or customer band segments to override or replace any microservice module, either from a set of offered options or with their own custom implementation based on the public interface.
Tactics: Compose a feature team for each microservice (1–4 people) and allow asynchronous development lifecycles. A technical-documentation-first approach to discover and resolve dependencies between modules before implementation begins. Give developers autonomy over the internal implementation of each service.
Successes, failures and takeaways
Moving too fast
In case 1
The external teams of specialist programmers benefit the most, and their productivity increases rapidly. The internal enterprise team improves less in terms of efficiency. The widening performance gap has a demoralizing effect and makes the scheduling of dependencies difficult.
The increased velocity of some teams encourages business owners to neglect requirements analysis.
Legacy enterprise systems take far too long to adapt. Complicated facades need to be built in some cases as throw-away code, to allow the rest of the services to move forward.
In case 2
The performance of the development team and the quality of implementation are exceptional. The result is rapid delivery of an unusually high-quality product at a sustainable pace.
With well-decomposed and encapsulated modules, it is possible to do a major architecture redesign and fundamentally change the flows at any point. Adaptivity is achieved by keeping the build technical-debt-free and architecture-evident at all stages.
The agile principles of Adaptive Software Development, FDD and DDD closely match the philosophy of microservices. All three recognize the importance of discipline and adaptive planning up-front.
Find champions and objective means of comparison
People will feel challenged about their contribution to previous practices, or insecure due to lack of experience. In both case studies above, arguments that challenge the most basic principles of software engineering are raised in order to dismiss the new method.
In Case 1
Even after a proven track record of successful implementation, services deteriorate into a tightly coupled system unless rigorously protected. Engineers need confirmation and approval of the new, counter-intuitive approach before they accept the mindset.
In Case 2
Challenges do not derive from technology. It is the business that struggles to adapt to the management of independently working micro-teams.
Simplifying management by reverting to a single-functional-pipeline team and a monolithic system reduces productivity by nearly 70%. The team reverts to the original decoupling. The engineers themselves become champions of the complicated architecture, which simplifies their work by introducing boundaries of responsibility.
Don’t try to build it all at once
One of the factors making microservices development especially agile is the granularity of the services. This allows short implementation cycles or complete re-write of a self-contained module. Another sought-after learning benefit is the fail-fast, fail-often cycle paradigm.
Starting small, by implementing a few key services at a time, will not provide the ability to demonstrate end-to-end user journeys, but it will provide good measures of progress and challenges, and allow complete testing, module by module. The completed modules will be genuinely done, without any further pending work.
Building a few complete services will provide the team with confidence, as well as a sense of how coming functionalities can be best approached. This will create champions, who may also work to educate engineers and business owners alike.
Protect your architecture
It is easy to get involved in tactical matters and unwittingly drift into entropy, the archenemy of every architecture. Entropy (mud) is far less harmful to a monolith than it is to a microservice ecosystem.
If the objective is a lightning-fast build followed by a rewrite, there are simpler and more effective ways to do that so that technical debt does not accumulate beyond control.
It is the architect’s responsibility to ensure that the system remains flexible to change and quick to produce results, while entropy is kept at manageable levels and the design remains evident in the physical implementation.
In case 1
Multiple stakeholders and shared ownership of the architecture quickly lead to feature creep and corruption of the integrity of various services and branches across the entire ecosystem.
In case 2
Module ownership by different team members limits feature creep. When a small number of programmers feel direct responsibility for a micro-service module, they protect its scope and integrity.
Here are some suggestions to prevent entropy from spreading across services.
– Keep corruption isolated and plan for recovery: stubs, facades, inversion of control, throw-aways, etc.
– Education: design principles are broken as a result of ignorance or force of habit.
– Plan to roll back the affected members of the ecosystem to their previous versions, if needed.
– Segment the API with namespaces, so that other services can use a segment independently. That will eventually allow for a large service to be split.
– If cross-dependencies are forced between services, join them. Keep encapsulation at the code level as much as possible; that leaves the door open for refactoring later.
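The namespace-segmentation suggestion above can be sketched as a routing table: routes are grouped under path prefixes, so each consumer depends on only one namespace, and a large service can later be split along those boundaries. The paths and handlers below are hypothetical:

```python
# API routes grouped by namespace (the first path segment). Each namespace
# is a candidate boundary along which the service could later be split.
ROUTES = {
    "/inventory/items": lambda: "list items",
    "/inventory/stock": lambda: "stock levels",
    "/pricing/quotes":  lambda: "price quote",
}

def namespace_of(path: str) -> str:
    """The first path segment identifies the namespace (future service)."""
    return path.strip("/").split("/")[0]

def routes_in(namespace: str) -> dict:
    """Select the routes a split-off service would take with it."""
    return {p: h for p, h in ROUTES.items() if namespace_of(p) == namespace}

# Splitting out "inventory" carries exactly its namespaced routes and
# leaves the rest of the API untouched.
assert set(routes_in("inventory")) == {"/inventory/items", "/inventory/stock"}
```

Because consumers already address each namespace independently, moving a namespace into its own service changes a hostname, not a contract.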
The first part of the article is here: https://corevalue.net/part-1-how-to-take-advantage-of-micro-services-architecture/