Where a Service Mesh Fits in the Landscape

Service mesh is helping to take the cloud native and open source communities to the next level, and we’re starting to see increased adoption across many types of companies — from start-ups to the enterprise. 

A service mesh overlaps with, complements, and in some cases replaces many of the tools commonly used to manage microservices, and it sits alongside many other technologies in the container landscape. Below, we explain how a service mesh fits with other commonly used container tools.

[Image: Service Mesh Landscape - Aspen Mesh]

Container Orchestration

Kubernetes provides scheduling, auto-scaling and automation functionality that solves most of the build and deploy challenges that come with containers. Where it leaves off, and where a service mesh steps in, is in solving critical runtime challenges for containerized applications. A service mesh adds uniform metrics, distributed tracing, encryption between services and fine-grained observability of how your cluster is behaving at runtime. Read more about why container orchestration and service mesh are critical for cloud native deployments.
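
As a minimal sketch of how the mesh layers onto the orchestrator, assuming an Istio-based mesh such as the one Aspen Mesh packages (the namespace name is illustrative): labeling a Kubernetes namespace opts every pod scheduled there into automatic sidecar injection, so workloads gain the mesh's metrics, tracing and encryption without any application changes.

    # Illustrative namespace; assumes Istio-style automatic sidecar injection.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: payments                # example namespace, not from the article
      labels:
        istio-injection: enabled    # the mesh injects a proxy sidecar into every pod scheduled here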

API Gateway

The main purpose of an API gateway is to accept traffic from outside your network and distribute it internally. The main purpose of a service mesh is to route and manage traffic within your network. A service mesh can work with an API gateway to efficiently accept external traffic and then route it effectively once it's inside your network. There is some nuance between the problems an API gateway solves at the edge and the service-to-service communication problems a service mesh solves within a cluster, but as cluster-deployment patterns evolve, these nuances are becoming less important. If you want to do billing, you'll want to keep your API gateway. But if you're focused on routing and authentication, you can likely replace an API gateway with a service mesh. Read more on how API gateways and service meshes overlap.
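
As a rough sketch of how traffic enters and is then routed inside the mesh, assuming an Istio-based mesh (the hostname, credential name and service names are all illustrative), a Gateway resource accepts external HTTPS traffic and a VirtualService routes it to an internal service:

    # Illustrative edge configuration for an Istio-based mesh.
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: public-gateway              # illustrative name
    spec:
      selector:
        istio: ingressgateway           # the mesh's default ingress proxy
      servers:
      - port:
          number: 443
          name: https
          protocol: HTTPS
        tls:
          mode: SIMPLE
          credentialName: example-com-cert   # assumed TLS secret
        hosts:
        - "api.example.com"
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: api-routes                  # illustrative name
    spec:
      hosts:
      - "api.example.com"
      gateways:
      - public-gateway
      http:
      - match:
        - uri:
            prefix: /orders
        route:
        - destination:
            host: orders.default.svc.cluster.local   # internal, in-mesh service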

Global ADC

Load balancers focus on distributing workloads throughout the network and ensuring the availability of applications and services. Load balancers have evolved into Application Delivery Controllers (ADCs): platforms for application delivery that ensure an organization's critical applications are highly available and secure. While basic load balancing remains the foundation of application delivery, modern ADCs consolidate enhanced functionality such as SSL/TLS offload, caching, compression, rate shaping, intrusion detection, application firewalls and remote access into a single strategic point. A service mesh provides basic load balancing, but if you need advanced capabilities such as SSL/TLS offload and rate shaping, you should consider pairing an ADC with a service mesh.
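
For reference, the mesh's basic load balancing is configured per destination service. A minimal sketch, assuming an Istio-based mesh and an illustrative service name:

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: orders-lb                   # illustrative name
    spec:
      host: orders.default.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          simple: ROUND_ROBIN           # basic client-side load balancing; ADC-class features (SSL offload, WAF, etc.) stay with the ADC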

mTLS

A service mesh provides defense with mutual TLS encryption of the traffic between your services. The mesh can automatically encrypt and decrypt requests and responses, removing that burden from the application developer. It can also improve performance by prioritizing the reuse of existing, persistent connections, reducing the need for the computationally expensive creation of new ones. Aspen Mesh provides more than client-server authentication and authorization: it allows you to understand and enforce how your services are communicating and prove it cryptographically. It automates the delivery of certificates and keys to the services, the proxies use them to encrypt the traffic (providing mutual TLS), and certificates are rotated periodically to reduce exposure to compromise. Mutual TLS also lets workloads in the mesh verify that they're talking to other workloads in the mesh, preventing man-in-the-middle attacks.
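
A minimal sketch of requiring mutual TLS mesh-wide, assuming an Istio-based mesh (Aspen Mesh builds on Istio); applying the policy in the root namespace makes it apply to every workload:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system           # the mesh root namespace, so the policy is mesh-wide
    spec:
      mtls:
        mode: STRICT                    # sidecars reject plaintext; all service-to-service traffic is mutually authenticated and encrypted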

CI/CD

Modern enterprises manage their applications via an agile, iterative lifecycle model. Continuous Integration and Continuous Deployment (CI/CD) systems automate the build, test, deploy and upgrade stages. A service mesh adds power to your CI/CD systems, allowing operators to build fine-grained deployment models like canary, A/B, automated dev/stage/prod promotion, and rollback. Doing this in the service mesh layer means the same models are available to every app in the enterprise without app modification. You can also up-level your CI testing using techniques like traffic mirroring and fault injection to expose every app to complicated, hard-to-simulate fault patterns before you encounter them with real users.
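
As an illustrative sketch of a canary driven from the mesh layer, assuming an Istio-based mesh (the service and subset names are assumptions, and the subsets would be defined in a companion DestinationRule), 90% of traffic stays on the stable version while 10% shifts to the candidate; rollback is just setting the weights back:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: checkout-canary             # illustrative name
    spec:
      hosts:
      - checkout                        # in-mesh service name (assumed)
      http:
      - route:
        - destination:
            host: checkout
            subset: v1                  # stable version
          weight: 90
        - destination:
            host: checkout
            subset: v2                  # canary version
          weight: 10

The same HTTP route can also carry a fault stanza to inject delays or errors, or a mirror destination to shadow live traffic, which is how the traffic mirroring and fault injection mentioned above are typically expressed.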

Credential Management 

We live in an API economy, and machine-to-machine communication needs to be secure. Microservices have credentials to authenticate themselves and other microservices via TLS, and often also have app-layer credentials to serve as clients of external APIs. It's tempting to focus only on the cost of initially configuring these credentials, but don't forget the lifecycle: rotation, auditing, revocation, responding to CVEs. Centralizing these credentials in the service mesh layer reduces the scope each application has to secure and improves your overall security posture.

APM

Traditional Application Performance Monitoring (APM) tools provide a dashboard that surfaces data, allowing users to monitor their applications in one place. A service mesh takes this one step further by providing observability. Monitoring is aimed at reporting the overall health of systems, so it is best limited to key business and systems metrics derived from time-series instrumentation. Observability focuses on providing highly granular insights into the behavior of systems along with rich context, which is perfect for debugging. Aspen Mesh provides deep observability that allows you to understand the current state of your system and to better understand system performance and behavior, even during what can be perceived as normal operation. Read more about the importance of observability in distributed systems.
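
As one concrete example of the observability the mesh controls, recent Istio releases (which Aspen Mesh builds on) expose a Telemetry resource; a sketch, assuming that API is available in your mesh version, that samples 10% of requests for distributed tracing mesh-wide:

    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: mesh-default
      namespace: istio-system           # root namespace, so the setting applies mesh-wide
    spec:
      tracing:
      - randomSamplingPercentage: 10.0  # sample 10% of requests for distributed traces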

Serverless

Serverless computing transforms source code into running workloads that execute only when called. The key difference between service mesh and serverless is that with serverless, a service can be scaled down to 0 instances if the system detects that it is not being used, thus saving you from the cost of continually having at least one instance running. Serverless can help organizations reduce infrastructure costs, while allowing developers to focus on writing features and delivering business value. If you’ve been paying attention to service mesh, these advantages will sound familiar. The goals with service mesh and serverless are largely the same – remove the burden of managing infrastructure from developers so they can spend more time adding business value. Read more about service mesh and serverless computing.

Learn More

If you’d like to learn more about how a service mesh can help you and your company, schedule a time to talk with one of our experts, or take a look at The Complete Guide to Service Mesh.