Aspen Mesh - Getting the Most Out of Your Service Mesh

How to Get the Most Out of Your Service Mesh

You’ve been hearing about service mesh. You have an idea of what it does and how it can help you manage your microservices. But what happens once you have one? How do you get as much out of it as you can?

Let’s start with a quick review of what a service mesh is, why you would need one, then move on to how to get the most out of your service mesh.

What's a Service Mesh?

  1. A transparent infrastructure layer that sits between your network and application, helping with communications between your microservices

  2. It could be your next game-changing decision

A service mesh is designed to handle a high volume of service-to-service communication using application programming interfaces (APIs). It ensures that communication among containerized application services is fast, reliable and secure. The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and write-once, run anywhere policy for microservices in your Kubernetes clusters.

Service meshes also address challenges that arise when your application is being consumed by an end user. The first key capability is monitoring the health of the services provided to the end user, then quickly tracing any health problems back to the correct microservice. Next, you'll need to ensure communication is secure and resilient.

When Do You Need a Service Mesh?

We’ve been having lots of discussions with people spread across the microservices, Kubernetes and service mesh adoption curves. And while it’s clear that many enterprise organizations are at least considering microservices, many are still waiting for best practices to emerge before deciding on their own path forward. That means the landscape keeps changing as needs evolve.

As an example, more organizations are looking to microservices for brownfield deployments, whereas – even a couple of years ago – almost everyone considered microservices architectures only for greenfield projects. This tells us that as microservices technology and tooling continue to evolve, it’s becoming more feasible for non-unicorn companies to effectively and efficiently decompose the monolith into microservices.

Think about it this way: in the past six months, the top three reasons we’ve heard people say they want to implement service mesh are:

  1. Observability – to better understand the behavior of Kubernetes clusters 
  2. mTLS – to add cluster-wide service encryption
  3. Distributed Tracing – to simplify debugging and speed up root cause analysis
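The second item on that list is a good illustration of how little work a mesh asks of application teams. As a sketch in Istio terms (the open source mesh this guide pairs with Aspen Mesh), cluster-wide mTLS can be enabled with a single mesh-wide policy and no application code changes; the resource name and the `istio-system` root namespace below are standard Istio defaults:

```yaml
# Mesh-wide PeerAuthentication: require mutual TLS for all
# service-to-service traffic. Placing it in the Istio root
# namespace (istio-system by default) makes it apply mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject any plaintext traffic between sidecars
```

Applied with `kubectl apply -f`, this one resource gives every sidecar-injected workload encrypted, mutually authenticated traffic.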

Gauging the current state of the cloud-native infrastructure space, there’s no doubt that tools like Kubernetes and Istio are still being explored and evaluated. But the gap is definitely closing. Companies are closely watching the leaders in the space to see how they are implementing these technologies and what benefits and challenges they are facing. As more organizations successfully adopt them, it’s becoming obvious that, while there’s a skills gap and new complexity to account for, the outcomes of increased velocity, better resiliency and improved customer experience mandate that many organizations actively map their own path with microservices. This will help ensure they are not left behind by the market leaders in their space.

Getting the Most Out of Your Service Mesh

In order to really stay ahead of the competition, you need to know best practices for getting the most out of your service mesh, recommendations from industry experts about how to measure your success, and ways to keep getting even more out of your technology.

But what do you want out of a service mesh? Since you’re reading this, there’s a good chance you’re responsible for making sure that your end users get the most out of your applications. That’s probably why you started down the microservices path in the first place.

If that’s true, then you’ve probably realized that microservices come with their own unique challenges, such as:

  • Increased surface area that can be attacked
  • Polyglot challenges
  • Controlling access for distributed teams developing towards a single application

That’s where a service mesh comes in. Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

TL;DR: a good service mesh keeps your company’s services running the way they should, giving you the observability, security and traffic management capabilities you need to effectively manage and control containerized applications so you can focus on adding the most value to your business.

When Service Mesh is a Win/Win

Service mesh is an infrastructure layer that can help entire organizations work together for better outcomes. In other words, service mesh is the ultimate DevOps enabler.

Here are a few highlights of the value a service mesh provides across teams:

  • Observability: go a step beyond system monitoring. Monitoring reports overall system health, while observability provides highly granular insights into the behavior of systems along with rich context
  • Security and Decreased Risk: better secure the services inside your network and quickly identify any compromising traffic entering your clusters
  • Operational Control: allow security and platform teams to set the right macro controls to enforce access controls, while allowing developers to make customizations they need to move quickly within defined guardrails
  • Increase Efficiency with a Developer Toolbox: remove the burden of managing infrastructure from the developer and provide developer-friendly features such as distributed tracing and easy canary deploys 
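To make the "easy canary deploys" point above concrete, here is a hedged sketch of how a mesh expresses one, again in Istio terms. The `reviews` service name and the v1/v2 subset labels are hypothetical placeholders:

```yaml
# DestinationRule: name the two deployed versions as routable subsets.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
# VirtualService: keep 90% of traffic on v1 and canary 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Shifting more traffic to the new version is then a one-line weight change, with no redeploy of the application itself.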

What’s the Secret to Getting the Most Out of Your Service Mesh?

There are a lot of things you can do to get more out of your service mesh. Here are three high level tactics to start with:

  1. Align on service mesh goals with your teams
  2. Choose the service mesh that can be broadly deployed to address your company's needs
  3. Measure your service mesh success over time in order to identify and make improvements

Still looking for more info about this? Check out the eBook: Getting the Most Out of Your Service Mesh.

Complete this form to get your copy of the eBook Getting the Most Out of Your Service Mesh:



Understanding Service Mesh

The Origin of Service Mesh

In the beginning, we had packets and packet-switched networks.

Everyone on the Internet — all 30 of them — used packets to build addressing and session establishment/teardown. Then they’d need a retransmission scheme. Then they’d build an ordered byte stream out of it.

Eventually, they realized they had all built the same thing. The RFCs for IP and TCP standardized this, and operating systems provided a TCP/IP stack, so no application ever again had to turn a best-effort packet network into a reliable byte stream.

We took our reliable byte streams and used them to make applications. Turns out that a lot of those applications had common patterns again — they requested things from servers, and then got responses. So, we separated these request/responses into metadata (headers) and body.

HTTP standardized the most widely deployed request/response protocol. Same story. App developers don't have to implement the mechanics of requests and responses. They can focus on the app on top.

There's a newer set of functionality that you need to build a reliable microservices application. Service discovery, versioning, zero-trust.... all the stuff popularized by the Netflix architecture, by 12-factor apps, etc. We see the same thing happening again - an emerging set of best practices that you have to build into each microservice to be successful.

So, service mesh is about putting all that functionality again into a layer, just like HTTP, TCP, packets, that's underneath your code, but creating a network for services rather than bytes.

Questions? Download The Complete Guide to Service Mesh or keep reading to find out more about what exactly a service mesh is.

What Is A Service Mesh?

A service mesh is a transparent infrastructure layer that sits between your network and application.

It’s designed to handle a high volume of service-to-service communications using application programming interfaces (APIs). A service mesh ensures that communication among containerized application services is fast, reliable and secure.

The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and the ability to control policy and configuration in your Kubernetes clusters.

Service mesh helps address many of the challenges that arise when your application is being consumed by the end user. Being able to monitor which services are communicating with each other, to verify that those communications are secure, and to control the service-to-service communication in your clusters is key to ensuring applications run securely and resiliently.

More Efficiently Managing Microservices

The self-contained, ephemeral nature of microservices comes with some serious upside, but keeping track of every single one is a challenge — especially when trying to figure out how the rest are affected when a single microservice goes down. The end result is that if you’re operating or developing in a microservices architecture, there’s a good chance part of your day is spent wondering what the hell your services are up to.

With the adoption of microservices, problems also emerge due to the sheer number of services that exist in large systems. Problems like security, load balancing, monitoring and rate limiting that had to be solved once for a monolith now have to be handled separately for each service.

Service mesh helps address many of these challenges so engineering teams, and businesses, can deliver applications more quickly and securely.

Why You Might Care

If you’re reading this, you’re probably responsible for making sure that you and your end users get the most out of your applications and services. In order to do that, you need to have the right kind of access, security and support. That’s probably why you started down the microservices path.

If that’s true, then you’ve probably realized that microservices come with their own unique challenges, such as:

  1. Increased surface area that can be attacked
  2. Polyglot challenges
  3. Controlling access for distributed teams developing on a single application

That’s where a service mesh comes in.

A service mesh is an infrastructure layer for microservices applications that can help reduce the complexity of managing microservices and deployments by handling infrastructure service communication quickly, securely and reliably. 

Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

Here’s the point: a good service mesh keeps your company’s services running the way they should. A service mesh designed for the enterprise, like Aspen Mesh, gives you all the observability, security and traffic management you need — plus access to engineering and support, so you can focus on adding the most value to your business.

And that is good news for DevOps.

The Rise of DevOps - and How Service Mesh Is Enabling It

It’s happening, and it’s happening fast.

Companies are transforming internal orgs and product architectures along a new axis of performance. They’re finding more value in iterations, efficiency and incremental scaling, which is pushing them to adopt DevOps methodologies. This focus on time-to-market is driving some of the most cutting-edge infrastructure technology we have ever seen. Technologies like containers and Kubernetes, along with a focus on stable, consistent and open APIs, allow small teams to make amazing progress and move at the speed they require, reducing friction and time to market.

The adoption of these technologies isn’t perfect, and as companies deploy them at scale, they realize that they have inadvertently increased complexity and de-centralized ownership and control. In many cases, it’s challenging to understand the entire system.

A service mesh enables DevOps teams by helping manage this complexity. It provides autonomy and freedom for development teams through a stable and scalable platform, while simultaneously providing a way for platform teams to enforce security, policy and compliance standards.

This empowers your development teams to make choices based on the problems they are solving rather than being concerned with the underlying infrastructure. Dev teams now have the freedom to deploy code without the fear of violating compliance or regulatory guidelines, and platform teams can put guardrails in place to ensure your applications are secure and resilient.
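As a concrete example of such a guardrail, again sketched in Istio terms, a platform team can restrict which workloads may call a service while leaving developers free to change anything inside that boundary. The namespaces, service account and labels below are hypothetical:

```yaml
# AuthorizationPolicy: only the frontend's service account may call
# workloads labeled app: backend in the backend namespace.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-allow-frontend
  namespace: backend
spec:
  selector:
    matchLabels:
      app: backend
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/frontend/sa/frontend"]
```

The policy lives with the platform team; development teams deploying behind it never have to touch it to ship new code.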

Want to learn more? Get the Complete Guide to Service Mesh here.


From NASA to Service Mesh

The New Stack recently published a podcast featuring our CTO, Andrew Jenkins, discussing How Service Meshes Found a Former Space Dust Researcher. In the podcast, Andrew talks about how he moved from working on electrical engineering and communication protocols for NASA to software, and finally to service mesh development here at Aspen Mesh.

“My background is in electrical engineering, and I used to work a lot more on the hardware side of it, but I did get involved in communication, almost from the physical layer, and I worked on some NASA projects and things like that,” said Jenkins. “But then my career got further and further up into the software side of things, and I ended up at a company called F5 Networks. [Eventually] this ‘cloud thing’ came along, and F5 started seeing a lot of applications moving to the cloud. F5 offers their product in a version that you use in AWS, so what I was working on was an open source project to make a Kubernetes ingress controller for the F5 device. That was successful, but what we saw was that a lot of the traffic was shifting to the inside of the Kubernetes cluster. It was service-to-service communication from all these tiny things--these microservices--that were designed to be doing business logic. So this elevated the importance of communication...and that communication became very important for all of those tiny microservices to work together to deliver the final application experience for developers. So we started looking at that microservice communication inside and figuring out ways to make that more resilient, more secure and more observable so you can understand what’s going on between your applications.”

In addition, the podcast covers the evolution of service mesh, more details about tracing and logging, canaries, Kubernetes, YAML files and other surrounding technologies that extend service mesh to help simplify microservices management.

“I hope service meshes become the [default] way to deal with distributed tracing or certificate rotation. So, if you have an application, and you want it to be secure, you have to deal with all these certs, keys, etc.,” Jenkins said. “It’s not impossible, but when you have microservices, you do not have to do it a whole lot more times. So that’s why you get this better bang for the buck by pushing that down into that service mesh layer where you don’t have to repeat it all the time.”

To listen to the entire podcast, visit The New Stack’s post.

Interested in reading more articles like this? Subscribe to the Aspen Mesh blog:


The Complete Guide to Service Mesh

What’s Going On In The Service Mesh Universe?

Service meshes are relatively new, extremely powerful and can be complex. There’s a lot of information out there on what a service mesh is and what it can do, but it’s a lot to sort through. Sometimes, it’s helpful to have a guide. If you’ve been asking questions like “What is a service mesh?” “Why would I use one?” “What benefits can it provide?” or “How did people even come up with the idea for service mesh?” then The Complete Guide to Service Mesh is for you.

Check out the free guide to find out:

  • The service mesh origin story
  • What a service mesh is
  • Why developers and operators love service mesh
  • How a service mesh enables DevOps
  • Problems a service mesh solves

The Landscape Right Now

A service mesh overlaps with, complements, and in some cases replaces many of the tools commonly used to manage microservices. Last year was all about evaluating and trying out service meshes. And while curiosity about service mesh is still at a peak, enterprises are already in the evaluation and adoption process.

The capabilities service mesh can add to ease managing microservices applications at runtime are clearly exciting to early adopters and companies evaluating service mesh. Conversations tell us that many enterprises are already using microservices and service mesh, and many others are planning to deploy in the next six months. And if you’re not yet sure about whether or not you need a service mesh, check out the recent Gartner, 451 and IDC reports on microservices — all of which say a service mesh will be mandatory by 2020 for any organization running microservices in production.

Get Started with Service Mesh

Are you already using Kubernetes and Istio? You might be ready to get started using a service mesh. Download Aspen Mesh here or contact us to talk with a service mesh expert about getting set up for success.

Get the Guide

Fill out the form below to get your copy of The Complete Guide to Service Mesh.