Microservices – What’s All The Fuss About?

I’ve written about containers in a previous blog post.

Recently I’ve been delving more into their cousin – Microservices. Microservices are one of the latest buzzwords in cloud computing, but how are they different from containers? Well, containers are a newish alternative to Virtual Machines – both are entities or buckets to run your code in. Virtual Machines abstract the hardware and run an operating system and everything above it. Containers abstract the operating system interface and run everything above it, including runtime libraries, in a “container engine”. Containers are typically leaner and faster to create, but provide less isolation than virtual machines.

Microservices represent something much broader: a new application architecture that is in many ways an improved successor to the Service-Oriented Architecture (SOA) of the late 90s. Microservices are a way to break apart a large monolithic application into a set of small, discrete processes, facilitating both independent development and scale-out for each function. Microservice architectures use containers as efficient, decoupled execution engines for each service.

Microservices Architecture vs. Traditional Client/Server (image courtesy of Azure/MSDN)

A Microservice exposes a carefully constructed, usually RESTful API that may use either a classic request/response interface or an event-driven publish/subscribe model. These APIs are language agnostic, imposing no limitations on the implementation choices of their callers. Microservices tend to be peers of each other, serve a single function and communicate over a message bus. Microservice APIs are versioned so that each service can evolve independently, and callers of a Microservice expect an asynchronous response to their requests. A container is frequently used to run each individual Microservice, and the rapid creation of such a container is what enables the scale-out property for the service. As few or as many containers as are needed for each Microservice can be dynamically created or destroyed based on load. This implies a load-balancing mechanism at each Microservice API boundary and requires careful attention to call timeouts to avoid cascading failures when an individual service instance has a problem. Adrian Cockcroft, former Netflix architect & Microservices pioneer, covers the cascading timeout problem thoroughly in his ACM talk.
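To make the timeout point concrete, here’s a minimal sketch in Python. The service URL is hypothetical; the pattern – a hard timeout plus a fallback value – is one simple way to stop a slow dependency from stalling its callers and cascading the failure upward:

```python
import json
import socket
import urllib.error
import urllib.request

def fetch_json(url, timeout=0.5, fallback=None):
    """Call a downstream Microservice with a hard timeout.

    If the dependency is slow or down, return `fallback` instead of
    blocking -- a caller that waits indefinitely would propagate the
    stall to its own callers and cascade the failure upward.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, socket.timeout, TimeoutError):
        return fallback

# Hypothetical endpoint -- an unreachable host degrades gracefully
# instead of hanging the caller:
price = fetch_json("http://pricing.internal.invalid/v1/price",
                   timeout=0.2, fallback={"price": None, "stale": True})
```

Real deployments layer more on top of this (retries with backoff, circuit breakers), but the hard deadline at every service boundary is the essential piece.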

Why will Microservices succeed where SOA largely failed? In the SOA era, components tended to be large, stateful, and to encompass many features and functions at each level. Processors and networks at the time weren’t fast enough for a fully distributed architecture with independent execution contexts, so services weren’t fine-grained enough to isolate functionality into single-function, single-address-space services. The result was a collection of complex, deeply interrelated components that didn’t deliver much improvement in development efficiency or maintainability. Today, we have hardware fast enough for fine-grained distributed Microservices, and we have rapidly provisioned containers to run each service instance in its own isolated address space. By running each Microservice instance in its own container, each service gets its own execution context, and the overall system becomes a collection of isolated, independent services communicating over a message bus.

There are multiple practical code development and maintenance benefits to a Microservices Architecture. My old college buddy and Quantifi Chief Architect Marc Adler makes a strong, practical business case for this in his excellent talk Microservices: The New Building Block for Financial Technology. By executing each Microservice in a separate container and address space, stateful data sharing between Microservices becomes cumbersome and requires explicit circumvention of the API rules, so it is naturally avoided. When different programs or services operate on the same data, dependencies become complex and the code can quickly become unmaintainable. When applications duplicate the same logic for common functions, every bug fix or feature has to be applied in numerous places owned by different teams. By making such a shared function a Microservice, it can be owned by one small team and kept in one place. When these architectural concepts are followed, with no shared state between services, each service can be developed independently and at its own pace. Another benefit of this approach is that each service can adhere to its own security requirements based on the sensitivity of its data and operations; there is no need to enforce one-size-fits-all security requirements across a large, complex system.

In summary, successful Microservices Architectures must be loosely coupled (each service can be updated independently of any other), service oriented (each service provides a well-formed, versioned API) and bounded in context (each service is isolated from the others, and callers don’t need to know anything about related services to use it). Each service in a Microservices-based Architecture is dedicated to performing one function.
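Those properties can be sketched with nothing but Python’s standard library. The service name, path and payload below are made up for illustration; the point is a single-function service whose API version lives in the URL, so a future /v2 can evolve without breaking /v1 callers:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PriceService(BaseHTTPRequestHandler):
    """Single-function Microservice: it quotes a price, nothing else.

    The version is part of the path, so /v2/price can be added later
    while existing /v1 callers keep working unchanged.
    """
    def do_GET(self):
        if self.path == "/v1/price":
            body = json.dumps({"price": 42.0, "currency": "USD"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

# Bind to an ephemeral port and serve from a background thread,
# then act as a caller hitting the versioned endpoint.
server = HTTPServer(("127.0.0.1", 0), PriceService)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/v1/price"
quote = json.load(urlopen(url))
server.shutdown()
```

In a real deployment this handler would run in its own container behind a load balancer, with as many replicas as the load requires – but the caller only ever sees the versioned API.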

So-called Serverless Computing takes these concepts one step further. Not only are Microservice containers ephemeral, but with Serverless Computing each API call actually triggers the creation of a new service instance for just that call. The most prominent example is Amazon Web Services’ Lambda, which creates a new instance in response to an API trigger and charges you only while that instance is running. Serverless Computing therefore lends itself to single-threaded, single-purpose functions rather than the longer-lived, multi-threaded processes typical of Microservices.
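The handler signature below is Lambda’s actual Python convention (a function taking an event and a context); the event shape shown follows the API Gateway proxy format, and the greeting logic is invented for illustration:

```python
import json

def handler(event, context):
    """AWS Lambda entry point: each API Gateway trigger runs this
    function in a freshly provisioned (or recycled) instance, and
    billing covers only the time the function actually executes.
    """
    # API Gateway proxy events carry query parameters here; the
    # field is null (None) when the caller passes no query string.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the function holds no state between invocations, Lambda is free to create or destroy instances per call – the serverless analogue of the container scale-out described above.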

 
