The Service Mesh Landscape Right Now

A service mesh overlaps with, complements, and in some cases replaces many tools that are commonly used to manage microservices. Last year was all about trying out service meshes. Curiosity about service mesh is still at a peak, but enterprises are now well into evaluation and adoption.

The capabilities a service mesh adds to ease managing microservices applications at runtime are clearly exciting to early adopters and to companies still evaluating. Our conversations tell us that many enterprises are already running microservices with a service mesh, and many others plan to deploy one in the next six months.

Understanding Service Mesh

The Origin of Service Mesh

In the beginning, we had packets and packet-switched networks. 

Everyone on the Internet (all 30 of them) used packets to build addressing and session establishment/teardown. Then they'd need a retransmission scheme. Then they'd build an ordered byte stream out of it.

Eventually, they realized they had all built the same thing. The RFCs for IP and TCP standardized this, operating systems provided a TCP/IP stack, and no application ever again had to turn a best-effort packet network into a reliable byte stream.

We took our reliable byte streams and used them to make applications. It turns out a lot of those applications had common patterns again: they requested things from servers and then got responses. So we separated these requests and responses into metadata (headers) and a body.

HTTP standardized the most widely deployed request/response protocol. Same story. App developers don’t have to implement the mechanics of requests and responses. They can focus on the app on top.  

There's a newer set of functionality that you need to build a reliable microservices application: service discovery, versioning, zero-trust networking, and all the other practices popularized by the Netflix architecture, by 12-factor apps, and so on. We see the same thing happening again: an emerging set of best practices that you have to build into each microservice to be successful.

So, a service mesh is about pulling all of that functionality into a layer underneath your code, just as HTTP, TCP, and packets did before it, but creating a network for services rather than for bytes.

Questions? Let’s start with the most basic one: What exactly is a service mesh? 

What is a Service Mesh?

"A service mesh is a transparent infrastructure layer that site between your network and application, helping you to manage communication between your microservices."

A service mesh is designed to handle a high volume of service-to-service communication through application programming interfaces (APIs), ensuring that communication among containerized application services is fast, reliable and secure.

The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and the ability to control policy and configuration in your Kubernetes clusters. 
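To make the traffic-control piece concrete, here is a minimal sketch of weighted routing in an Istio-based mesh (Istio is one popular service mesh implementation). The reviews service and its v1/v2 subsets are hypothetical; the point is that the routing decision lives in mesh configuration rather than in application code:

    # DestinationRule: name the two versions of the (hypothetical) reviews service.
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: reviews
    spec:
      host: reviews
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2
    ---
    # VirtualService: send 90% of traffic to v1 and 10% to the v2 canary.
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10

Shifting the weights (or rolling back) is a one-line configuration change applied with kubectl, with no rebuild or redeploy of the services themselves.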

A service mesh helps address many of the challenges that arise once your application is being consumed by end users. Being able to monitor which services are communicating with each other and whether those communications are secure, and being able to control service-to-service communication in your clusters, is key to ensuring applications run securely and resiliently.

More Efficiently Managing Microservices

The self-contained, ephemeral nature of microservices comes with some serious upside, but keeping track of every single one is a challenge, especially when trying to figure out how the rest are affected when a single microservice goes down. The end result is that if you're operating or developing in a microservices architecture, there's a good chance part of your day is spent wondering what the hell your services are up to.

With the adoption of microservices, problems also emerge due to the sheer number of services that exist in large systems. Problems like security, load balancing, monitoring and rate limiting that had to be solved once for a monolith now have to be handled separately for each service. A service mesh helps address many of these challenges so engineering teams, and businesses, can deliver applications more quickly and securely.

Why You Might Care

If you’re reading this, you’re probably responsible for making sure that you and your end users get the most out of your applications and services. In order to do that, you need to have the right kind of access, security and support. That’s probably why you started down the microservices path.

If that’s true, then you’ve probably realized that microservices come with their own unique challenges, such as: 

  1. Increased surface area that can be attacked
  2. Polyglot challenges
  3. Controlling access for distributed teams developing on a single application 

That’s where a service mesh comes in. 

A service mesh is an infrastructure layer for microservices applications that can help reduce the complexity of managing microservices and deployments by handling infrastructure service communication quickly, securely and reliably.  

Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices.  

Here's the point: a good service mesh keeps your company's services running the way they should. A service mesh designed for the enterprise, like Aspen Mesh, gives you all the observability, security and traffic management you need, plus access to engineering and support, so you can focus on adding the most value to your business.

And that is good news for DevOps. 

The Rise of DevOps - and How Service Mesh is Enabling It 

It’s happening, and it’s happening fast. 

Companies are transforming internal orgs and product architectures along a new axis of performance. They're finding more value in iteration, efficiency and incremental scaling, which is pushing them to adopt DevOps methodologies. This focus on time to market is driving some of the most cutting-edge infrastructure technology we have ever seen. Technologies like containers and Kubernetes, along with a focus on stable, consistent and open APIs, let small teams make amazing progress and move at the speed they require, reducing both friction and time to market.

The adoption of these technologies isn't perfect, and as companies deploy them at scale, they realize they have inadvertently increased complexity and decentralized ownership and control. In many cases, it's challenging to understand the entire system.

A service mesh enables DevOps teams by helping manage this complexity. It provides autonomy and freedom for development teams through a stable and scalable platform, while simultaneously providing a way for platform teams to enforce security, policy and compliance standards.  

This empowers your development teams to make choices based on the problems they are solving rather than being concerned with the underlying infrastructure. Dev teams now have the freedom to deploy code without the fear of violating compliance or regulatory guidelines, and platform teams can put guardrails in place to ensure your applications are secure and resilient. 

What a Service Mesh Provides

What do you want out of a service mesh?

If you're like most people with a finger in the tech-world pie, you've heard of a service mesh. Now that you know what one is, you're probably wondering what it can solve for you.

As covered above, a service mesh is an infrastructure layer that handles service-to-service communication quickly, securely and reliably, giving you a uniform way to secure, connect and monitor your microservices.

A good service mesh keeps your company's services running the way they should, giving you and your team access to the powerful tools you need, plus access to engineering and support, so you can focus on adding the most value to your business.


Next, let’s dive into three key areas where a service mesh can really help: observability, security and operational control. 

Observability

Are you interested in taking your system monitoring a step further? A service mesh provides monitoring plus observability. While monitoring reports overall system health, observability focuses on highly granular insights into the behavior of systems along with rich context. 

Deep System Insights

Kubernetes seemed like the way to rapid iteration and quick development sprints, but the promise and the reality of managing containerized applications at scale are two very different things. 

Docker and Kubernetes enable you to more easily build and deploy apps. But it’s often difficult to understand how those apps are behaving once deployed. So, a service mesh provides tracing and telemetry metrics that make it easy to understand your system and quickly root cause any problems. 
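As one small illustration of how little the application has to do, in an Istio-based mesh the trace sampling rate can be adjusted with a single mesh-wide resource. This sketch assumes a recent Istio release that exposes the Telemetry API; older versions configure sampling through mesh config instead:

    # Sample 10% of requests for distributed tracing, mesh-wide.
    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: mesh-default
      namespace: istio-system   # root namespace, so the setting applies everywhere
    spec:
      tracing:
      - randomSamplingPercentage: 10.0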

 

An Intuitive UI

A service mesh is uniquely positioned to gather a trove of important data from your services. The sidecar approach injects an Envoy sidecar proxy into every pod in your cluster, and those proxies surface telemetry data up to the Istio control plane. This is great, but it also means a mesh will gather more data than is useful. The key is surfacing only the data you need to confirm the health and security status of your services. A good UI solves this problem, and it also lowers the bar for the engineering team, making it easier for more members of the team to understand and control the services in your organization's architecture.
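For reference, sidecar injection itself is usually automatic. In Istio, labeling a namespace is enough for the injection webhook to add the Envoy proxy to every pod scheduled there (the demo namespace name is just an example):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo                    # hypothetical namespace
      labels:
        istio-injection: enabled    # tells Istio's webhook to inject the Envoy sidecar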

Security

A service mesh provides security features aimed at securing the services inside your network and quickly identifying any compromising traffic entering your cluster. 

 

mTLS and Why it Matters

Securing microservices is hard. There are a multitude of tools that address microservices security, but a service mesh is the most elegant solution for encrypting on-the-wire traffic within the network.

A service mesh provides defense with mutual TLS (mTLS) encryption of the traffic between your services. The mesh can automatically encrypt and decrypt requests and responses, removing that burden from the application developer. It can also improve performance by prioritizing the reuse of existing, persistent connections, reducing the need for the computationally expensive creation of new ones. With a service mesh, you can secure traffic over the wire and also enforce strong identity-based authentication and authorization for each microservice.
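In an Istio-based mesh, for example, mesh-wide mTLS can be required with a single resource. This is a minimal sketch, assuming the control plane runs in the istio-system namespace:

    # Require mTLS for all service-to-service traffic in the mesh.
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system   # applying it in the root namespace makes it mesh-wide
    spec:
      mtls:
        mode: STRICT            # sidecars reject any plaintext traffic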

We see a lot of value in this for enterprise companies. With a good service mesh, you can see whether mTLS is enabled and working between each of your services and get immediate alerts if security status changes. 

Ingress & Egress Control

Service mesh adds a layer of security that allows you to monitor and address compromising traffic as it enters the mesh. Istio integrates with Kubernetes as an ingress controller and takes care of load balancing for ingress. This allows you to add a level of security at the perimeter with ingress rules. Egress control allows you to see and manage external services and control how your services interact with them. 
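As a sketch of what this looks like in Istio terms, an ingress Gateway admits traffic at the edge of the mesh, and a ServiceEntry declares an external dependency so egress to it can be observed and controlled. The hostnames, resource names and TLS secret below are hypothetical:

    # Ingress: accept HTTPS at the mesh edge via Istio's ingress gateway.
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: public-ingress
    spec:
      selector:
        istio: ingressgateway        # binds to the default ingress gateway pods
      servers:
      - port:
          number: 443
          name: https
          protocol: HTTPS
        hosts:
        - "shop.example.com"
        tls:
          mode: SIMPLE
          credentialName: shop-cert  # TLS certificate stored as a Kubernetes secret
    ---
    # Egress: declare an external API so calls to it show up in mesh telemetry
    # and can be subjected to routing and policy.
    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: external-payments
    spec:
      hosts:
      - api.payments.example.com
      location: MESH_EXTERNAL
      resolution: DNS
      ports:
      - number: 443
        name: https
        protocol: TLS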

Operational Control

A service mesh allows security and platform teams to set the right macro controls to enforce access controls, while allowing developers to make customizations they need to move quickly within these guardrails. 

 

RBAC

A strong Role-Based Access Control (RBAC) system is arguably one of the most critical requirements in large engineering organizations, since even the most secure system can be easily circumvented by overprivileged employees. Restricting privileged users to the least privileges necessary to perform their job responsibilities, setting access to systems to "deny all" by default, and maintaining proper documentation of roles and responsibilities are among the most critical security concerns in the enterprise.
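In current Istio versions, this deny-by-default posture is expressed with AuthorizationPolicy resources. A minimal sketch, with hypothetical namespace and service-account names:

    # Default deny: an empty policy in a namespace rejects all requests to workloads there.
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: deny-all
      namespace: payments
    spec: {}
    ---
    # Explicit allow: only the checkout service account may call workloads in payments.
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-checkout
      namespace: payments
    spec:
      action: ALLOW
      rules:
      - from:
        - source:
            principals: ["cluster.local/ns/shop/sa/checkout"]

Identity here comes from the mTLS certificates described above, so the allow rule is tied to a workload's identity rather than to an IP address.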

Getting this configuration right at scale is a challenge of its own. We've worked to address it by providing Istio Vet, which is designed to warn you of incorrect or incomplete configuration of your service mesh and to provide guidance to fix it. Istio Vet can also prevent misconfigurations by refusing to allow them in the first place. Global Istio configuration resources require a different approach, which is addressed by our Traffic Claim Enforcer solution.

 

The Importance of Policy Frameworks

As companies embrace DevOps and microservice architectures, their teams are moving more quickly and autonomously than ever before. The result is a faster time to market for applications, but more risk to the business. The responsibility for understanding and managing the company's security and compliance needs has now shifted left, onto teams that may not have the expertise or the desire to take on this burden.

Service mesh makes it easy to control policy and understand how policy settings will affect application behavior. In addition, analytics insights help you get the most out of policy through monitoring, vetting and policy violation analytics so you can quickly understand the best actions to take. 

Policy frameworks allow you to securely and efficiently deploy microservices applications while limiting risk and unlocking DevOps productivity. Key to this innovation is the ability to synthesize business-level goals, regulatory or legal requirements, operational metrics, and team-level rules into high-performance service mesh policy that sits adjacent to every application.


A good service mesh keeps your company's services running the way they should, giving you observability, security and operational control, plus access to engineering and support, so you are free to focus on adding more value to your business.