
Digital Transformation: How Service Mesh Can Help

Your Company’s Digital Transformation

It’s happening everywhere, and it’s happening fast. In order to meet consumers head on in the best, most secure ways, enterprises are jumping on the digital transformation train (check out this Forrester report). 

Several years ago, digital transformations saw companies moving from monolithic architectures towards microservices and Kubernetes, but service mesh was in its infancy. No one knew they'd need something to help manage service-to-service communication. Now, with increasing complexity and demands coupled with thinly-stretched resources or teams without service mesh expertise, supported service mesh is becoming a good solution for many--especially for DevOps teams.

Service Mesh for DevOps

"DevOps" is a term used to describe the business relationship between development and IT operations. Mostly, the term is used when referring to improving communication and collaboration between the two teams. But while Dev is responsible for creating new functionality to drive business, Ops is often the unsung--but extremely important--hero behind the scenes. In IT Ops, you’re on the hook for strategy development, system design and performance, quality control, direction and coordination of your team all while collaborating with the Dev team and other internal stakeholders to achieve your business’s goals and drive profitability. Ultimately, it’s the Dev and Ops teams who are responsibility to ensure that teams are communicating effectively, systems are monitored correctly, high customer satisfaction is achieved and projects and issue resolution are completed on time. A service mesh can help with this by enabling DevOps.

Integrating a Service Mesh: Align with Business Objectives

As you think about adopting a service mesh, keep in mind that your success over time is largely dependent on aligning with your company’s business objectives. Sharing business objectives like these with your service mesh team will help to ensure you get--and keep--the features and capabilities that you really need, when you need them, and that they stay relevant.

What are some of your company’s business objectives? Here are three we’ve identified that a service mesh can help to streamline:

1. Automating More Processes (i.e. Removing Toil)
Automating processes frees up your team from mundane tasks so they can focus on more important projects. Automation can save you time and money.

2. Increasing Infrastructure Performance
Building and maintaining a battle-tested environment is key to your end users’ experience, and therefore to your customer retention and churn rates and your company’s bottom line.

In addition, much of your time is spent developing strategies to monitor your systems and working through issue resolution as quickly as possible--whether issues pop up during the workday or in the middle of the night. Fortunately, because a service mesh comes with observability, security and resilience features, it can help alleviate these responsibilities, decreasing mean time to detection (MTTD) and mean time to resolution (MTTR).

3. Maintaining Delivery to Customers
Reducing friction in the user experience is the name of the game these days, so UX and reliability are key to keeping your end users happy. If you’re looking at a service mesh, you’re already using a microservices architecture, and you’re likely using Kubernetes clusters. But once those become too complex in production--or don’t have all the features you need--it’s time to add a service mesh into the mix. A service mesh’s observability features--like cluster health monitoring, service traffic monitoring, easy debugging and root cause identification with distributed tracing--help with this. In addition, an intuitive UI is key to surfacing these features in a way that is easy to understand and manipulate, so make sure you’re looking at a service mesh that’s easy for your Dev team to use. This will help provide a more seamless (and secure) experience for your end users.

Evolution; Not Revolution

How do you actually go about approaching the process of integrating a service mesh? What will drive success is for you to have agility and stability. But that can be a tall order, so it can be helpful to approach integrating a service mesh as evolution, rather than revolution. Three key areas to consider while you’re evaluating a service mesh include:

  1. Mitigating risk
  2. Production readiness
  3. Policy frameworks

Mitigating Risk
Risk is unavoidable, so it’s imperative to take steps to mitigate it as much as possible. The only time your company should be making headlines is because of good news, and ensuring security, compliance and data integrity is the way to get there. With security and compliance top of mind for many, it’s important to address security head on.

With a well-designed enterprise service mesh, you can expect plenty of security, compliance and policy features that make it easy for your company to move toward a zero-trust network. Features can include anything from ensuring the principle of least privilege and secure default settings to technical capabilities such as fine-grained RBAC and incremental mTLS.
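To make "incremental mTLS" concrete, here is a hedged sketch (the service name and namespace are hypothetical) of how an Istio-based mesh can enforce strict mutual TLS for a single service while the rest of the mesh is unchanged:

```yaml
# Hypothetical example: enforce strict mTLS for just one service using
# Istio's v1alpha1 authentication API (the API used by Istio 1.3/1.4).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: reviews-strict-mtls
  namespace: default
spec:
  targets:
  - name: reviews      # only this service is affected
  peers:
  - mtls:
      mode: STRICT     # callers must present a mesh-issued client certificate
```

Because the policy is scoped to one target, the rest of the mesh can continue accepting plaintext while you migrate services one at a time.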

Production Readiness
Your applications are ready to be used by your end users, and your technology stack needs to be ready too. What makes a real impact here is reliability. Service mesh features like dynamic request routing, fast retries, configuration vetters, circuit breaking and load balancing greatly increase the resiliency of microservice architectures. Support is also a feature that some enterprises will want to consider in light of whether service mesh expertise is a core in-house skill for the business. Having access to an expert support team can make a tremendous difference in your production readiness and your end users’ experiences.
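As an illustration of those resiliency features, here is a hedged sketch (service names are hypothetical) of Istio-style configuration for fast retries and circuit breaking:

```yaml
# Hypothetical sketch: retries and circuit breaking for a "reviews" service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3                  # retry a failed request up to 3 times
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:              # circuit breaking: eject bad endpoints
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

None of this requires changing application code; the mesh layer applies the same behavior to every caller of the service.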

Policy Frameworks
While configuration is useful for setting up how a system operates, policy is useful for dictating how a system responds when something happens. With a service mesh, the power of policy and configuration combined provides capabilities that can drive outcome-based behavior from your applications. A policy catalog can accelerate this behavior, while analytics examine policy violations and help determine the best actions to take. This improves developer productivity through canary, authorization and service availability policies.

How to Measure Service Mesh Success

No plan is complete without a way to measure, iterate and improve your success over time. So how do you go about measuring the success of your service mesh? There are a lot of factors to take into consideration, so it’s a good idea to talk to your service mesh provider in order to leverage their expertise. But in the meantime, there are a few things you can consider to get an idea of how well your service mesh is working for you. Start by finding a good way to measure 1) how your security and compliance are impacted, 2) how much you’re able to reduce downtime and 3) differences you see in your efficiency.

Looking for more specific questions to ask? Check out the eBook, Getting the Most Out of Your Service Mesh for ideas on the right questions to ask and what to measure for success.


The Service Mesh Landscape

Where A Service Mesh Fits in the Landscape

Service mesh is helping to take the cloud native and open source communities to the next level, and we’re starting to see increased adoption across many types of companies -- from start-ups to the enterprise. 

While a service mesh overlaps with, complements, and in some cases replaces many tools that are commonly used to manage microservices, many technologies are involved in the service mesh landscape. Below, we've explained some of the ways that a service mesh fits with other commonly used container tools.


Container Orchestration

Kubernetes provides scheduling, auto-scaling and automation functionality that solves most of the build and deploy challenges that come with containers. Where it leaves off, and where service mesh steps in, is solving some critical runtime challenges with containerized applications. A service mesh adds uniform metrics, distributed tracing, encryption between services and fine-grained observability of how your cluster is behaving at runtime. Read more about why container orchestration and service mesh are critical for cloud native deployments.

API Gateway

The main purpose of an API gateway is to accept traffic from outside your network and distribute it internally. The main purpose of a service mesh is to route and manage traffic within your network. A service mesh can work with an API gateway to efficiently accept external traffic then effectively route that traffic once it’s in your network. There is some nuance in the problems solved at the edge with an API Gateway compared to service-to-service communication problems a service mesh solves within a cluster. But with the evolution of cluster-deployment patterns, these nuances are becoming less important. If you want to do billing, you’ll want to keep your API Gateway. But if you’re focused on routing and authentication, you can likely replace an API gateway with service mesh. Read more on how API gateways and service meshes overlap.
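To make that division of labor concrete, here is a hedged sketch (host and service names are hypothetical) of how an Istio-style mesh accepts external traffic at the edge and routes it to an in-mesh service:

```yaml
# Hypothetical sketch: a Gateway accepts external traffic, a VirtualService
# routes it to an internal service once it's inside the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: web-gateway
spec:
  selector:
    istio: ingressgateway    # bind to the mesh's ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "shop.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
  - "shop.example.com"
  gateways:
  - web-gateway
  http:
  - route:
    - destination:
        host: web            # in-cluster service; the mesh handles routing
```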

Global ADC

Load balancers focus on distributing workloads throughout the network and ensuring the availability of applications and services. Load balancers have evolved into Application Delivery Controllers (ADCs), platforms for application delivery that ensure an organization’s critical applications are highly available and secure. While basic load balancing remains the foundation of application delivery, modern ADCs offer much richer functionality, such as SSL/TLS offload, caching, compression, rate-shaping, intrusion detection, application firewalls and remote access, consolidated into a single strategic point. A service mesh provides basic load balancing, but if you need advanced capabilities such as SSL/TLS offload and rate-shaping, you should consider pairing an ADC with a service mesh.


Security and mTLS

Service mesh provides defense with mutual TLS encryption of the traffic between your services. The mesh can automatically encrypt and decrypt requests and responses, removing that burden from the application developer. It can also improve performance by prioritizing the reuse of existing, persistent connections, reducing the need for the computationally expensive creation of new ones. Aspen Mesh provides more than just client-server authentication and authorization; it allows you to understand and enforce how your services are communicating and prove it cryptographically. It automates the delivery of certificates and keys to the services (the proxies use them to encrypt the traffic, providing mutual TLS) and periodically rotates certificates to reduce exposure to compromise. You can use TLS to ensure that Aspen Mesh instances can verify that they’re talking to other Aspen Mesh instances to prevent man-in-the-middle attacks.


CI/CD

Modern enterprises manage their applications via an agile, iterative lifecycle model. Continuous Integration and Continuous Deployment systems automate the build, test, deploy and upgrade stages. Service mesh adds power to your CI/CD systems, allowing operators to build fine-grained deployment models like canary, A/B, automated dev/stage/prod promotion, and rollback. Doing this in the service mesh layer means the same models are available to every app in the enterprise without app modification. You can also up-level your CI testing using techniques like traffic mirroring and fault injection to expose every app to complicated, hard-to-simulate fault patterns before you encounter them with real users.
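For example, a canary deployment model can be expressed purely in mesh configuration. The sketch below (service names and version labels are hypothetical) shifts 10% of traffic to a new version without touching the application:

```yaml
# Hypothetical sketch: a 90/10 canary split between two service versions.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
  - checkout
  http:
  - route:
    - destination:
        host: checkout
        subset: v1
      weight: 90          # most traffic stays on the stable version
    - destination:
        host: checkout
        subset: v2
      weight: 10          # the canary gets a small slice
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: checkout
spec:
  host: checkout
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

Promoting or rolling back the canary is then a matter of adjusting the weights, which a CI/CD system can automate.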

Credential Management 

We live in an API economy, and machine-to-machine communication needs to be secure.  Microservices have credentials to authenticate themselves and other microservices via TLS, and often also have app-layer credentials to serve as clients of external APIs. It’s tempting to focus only on the cost of initially configuring these credentials, but don’t forget the lifecycle – rotation, auditing, revocation, responding to CVEs. Centralizing these credentials in the service mesh layer reduces scope and improves the security posture.


Application Performance Monitoring

Traditional Application Performance Monitoring (APM) tools provide a dashboard that surfaces data, allowing users to monitor their applications in one place. A service mesh takes this one step further by providing observability. Monitoring is aimed at reporting the overall health of systems, so it is best limited to key business and systems metrics derived from time-series based instrumentation. Observability focuses on providing highly granular insights into the behavior of systems along with rich context, perfect for debugging purposes. Aspen Mesh provides deep observability that allows you to understand the current state of your system, and also provides a way to better understand system performance and behavior, even during what can be perceived as normal operation of a system. Read more about the importance of observability in distributed systems.
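As a small illustration of what uniform metrics buy you: assuming Istio's standard istio_requests_total metric is being scraped by Prometheus, the per-service error rate becomes a single query (a sketch, not an Aspen Mesh-specific dashboard):

```promql
# Hypothetical query: per-service 5xx error rate over the last 5 minutes,
# using Istio's standard request metrics.
sum(rate(istio_requests_total{reporter="destination", response_code=~"5.."}[5m])) by (destination_service)
  /
sum(rate(istio_requests_total{reporter="destination"}[5m])) by (destination_service)
```

Because every sidecar emits the same metric with the same labels, this one query covers every service in the mesh, in any language, with no instrumentation code.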


Serverless

Serverless computing transforms source code into running workloads that execute only when called. The key difference between service mesh and serverless is that with serverless, a service can be scaled down to 0 instances if the system detects that it is not being used, thus saving you from the cost of continually having at least one instance running. Serverless can help organizations reduce infrastructure costs, while allowing developers to focus on writing features and delivering business value. If you’ve been paying attention to service mesh, these advantages will sound familiar. The goals with service mesh and serverless are largely the same – remove the burden of managing infrastructure from developers so they can spend more time adding business value. Read more about service mesh and serverless computing.

Learn More

If you'd like to learn more about how a service mesh can help you and your company, schedule a time to talk with one of our experts, or take a look at The Complete Guide to Service Mesh.


When Do You Need A Service Mesh?

One of the questions I often hear is: "Do I really need a service mesh?" The honest answer is "It depends." Like nearly everything in the technology space (or more broadly "nearly everything"), this depends on the benefits and costs. But after having helped users progress from exploration to production deployments in many different scenarios, I'm here to share my perspective on which inputs to include in your decision-making process.

A service mesh provides a consistent way to connect, secure and observe microservices. Most service meshes are tightly integrated with an orchestration platform, commonly Kubernetes. There's no way around it; a service mesh is another thing, and at least part of your team will have to learn it. That's a cost, and you should compare that cost to the benefits of operational simplification you may achieve.

But apart from costs and benefits, what should you be asking in order to determine if you really need a service mesh? The number of microservices you’re running, as well as urgency and timing, can have an impact on your needs.

How Many Microservices?

If you're deploying your first or second microservice, I think it is just fine to not have a service mesh. You should, instead, focus on learning Kubernetes and factoring stateless containers out of your applications first. You will naturally build familiarity with the problems that a service mesh can solve, and that will make you much better prepared to plan your service mesh journey when the time comes.

If you have an existing application architecture that provides the observability, security and resilience that you need, then you are already in a good place. For you, the question becomes when to add a service mesh. We usually see organizations notice the toil associated with utility code to integrate each new microservice. Once that toil gets painful enough, they evaluate how they could make that integration more efficient. We advocate using a service mesh to reduce this toil.

The exact point at which service mesh benefits clearly outweigh costs varies from organization to organization. In my experience, teams often realize they need a consistent approach once they have five or six microservices. However, many users push to a dozen or more microservices before they notice the increasing cost of utility code and the increasing complexity of slight differences across their applications. And, of course, some organizations continue scaling and never choose a service mesh at all, investing in application libraries and tooling instead. On the other hand, we also work with early birds that want to get ahead of the rising complexity wave and introduce service mesh before they've got half-a-dozen microservices. But the number of microservices you have isn’t the only part to consider. You’ll also want to consider urgency and timing. 

Urgency and Timing

Another part of the answer to “When do I need a service mesh?” is your timing. The urgency of considering a service mesh depends on your organization’s challenges and goals, but it can also be gauged from your current processes and state of operations. Here are some states that may reduce or eliminate your urgency to use a service mesh:

  1. Your microservices are all written in one language ("monoglot") by developers in your organization, building from a common framework.
  2. Your organization dedicates engineers to building and maintaining org-specific tooling and instrumentation that's automatically built into every new microservice.
  3. You have a partially or totally monolithic architecture where application logic is built into one or two containers instead of several.
  4. You release or upgrade all-at-once after a manual integration process.
  5. You use application protocols that are not served by existing service meshes (i.e., usually anything other than HTTP, HTTP/2 or gRPC).

On the other hand, here are some signals that you will need a service mesh and may want to start evaluating or adopting early:

  1. You have microservices written in many different languages that may not follow a common architectural pattern or framework (or you're in the middle of a language/framework migration).
  2. You're integrating third-party code or interoperating with teams that are a bit more distant (for example, across a partnership or M&A boundary) and you want a common foundation to build on.
  3. Your organization keeps "re-solving" problems, especially in the utility code (my favorite example: certificate rotation, while important, is no scrum team's favorite story in the backlog).
  4. You have robust security, compliance or auditability requirements that span services.
  5. Your teams spend more time localizing or understanding a problem than fixing it.

I consider this last point the three-alarm fire signaling that you need a service mesh, and it's a good way to return to the quest for simplification. When an application is failing to deliver a quality experience to its users, how does your team resolve it? We work with organizations that report that finding the problem is often the hardest and most expensive part.

What Next?

Once you've localized the problem, can you alleviate or resolve it? It's a painful situation if the only fix is to develop new code or rebuild containers under pressure. That's where you see the benefit from keeping resiliency capabilities independent of the business logic (like in a service mesh).

If this story is familiar to you, you may need a service mesh right now. If you're getting by with your existing approach, that’s great. Just keep in mind the costs and benefits of what you’re working with, and keep asking:

  1. Is what you have right now really enough, or are you spending too much time trying to find problems instead of developing and providing value for your customers?
  2. Are your operations working well with the number of microservices you have, or is it time to simplify?
  3. Do you have critical problems that a service mesh would address?

Keeping tabs on the answers to these questions will help you determine if — and when — you really need a service mesh.

In the meantime if you're interested in learning more about service mesh, check out The Complete Guide to Service Mesh.


Aspen Mesh 1.4.4 & 1.3.8 Security Update

Aspen Mesh is announcing the release of 1.4.4 and 1.3.8 (based on upstream Istio 1.4.4 & 1.3.8), both addressing a critical security vulnerability. All our customers are strongly encouraged to upgrade to these versions immediately, based on your currently deployed Aspen Mesh version.

The Aspen Mesh team reported this CVE to the Istio community as per the Istio CVE reporting guidelines, and our team was able to further contribute to the community--and to all Istio users--by fixing this issue in upstream Istio. As this CVE is extremely easy to exploit and its risk score was deemed very high (9.0), we wanted to respond with urgency by getting a patch into upstream Istio and out to our customers as quickly as possible. This is just one of the many ways we are able to provide value to our customers as a trusted partner by ensuring that the pieces from Istio (and Aspen Mesh) are secure and stable.

Below are details about CVE-2020-8595, steps to verify whether you’re currently vulnerable, and how to upgrade to the patched releases.

CVE Description

A bug in Istio's Authentication Policy exact path matching logic allows unauthorized access to resources without a valid JWT token. This bug affects all versions of Istio (and Aspen Mesh) that support JWT Authentication Policy with path based trigger rules (all 1.3 & 1.4 releases). The logic for the exact path match in the Istio JWT filter includes query strings or fragments instead of stripping them off before matching. This means attackers can bypass the JWT validation by appending “?” or “#” characters after the protected paths.
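The flawed matching behavior can be sketched in a few lines of Python (an illustration of the logic described above, not Istio's actual Envoy filter code):

```python
def exact_match_buggy(request_path: str, protected: str) -> bool:
    # Vulnerable behavior: the comparison sees the raw request path with the
    # query string and fragment included, so "/productpage?" != "/productpage".
    return request_path == protected

def exact_match_fixed(request_path: str, protected: str) -> bool:
    # Patched behavior: strip the query string and fragment before matching.
    for sep in ("?", "#"):
        request_path = request_path.split(sep, 1)[0]
    return request_path == protected

# The buggy exact match fails for "/productpage?", so the JWT check never
# triggers and the request is forwarded without a token.
print(exact_match_buggy("/productpage?", "/productpage"))  # False -> filter bypassed
print(exact_match_fixed("/productpage?", "/productpage"))  # True  -> JWT enforced
```

The patched releases behave like exact_match_fixed, stripping "?" and "#" suffixes before performing the exact path comparison.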

Example Vulnerable Istio Configuration

Consider a JWT Authentication Policy that triggers on an exact HTTP path match of "/productpage". In this example, the Authentication Policy is applied at the ingress gateway service so that any request with the exact path “/productpage” requires a valid JWT token. In the absence of a valid JWT token, the request is denied and never forwarded to the productpage service. However, due to this CVE, any request made to the ingress gateway with the path "/productpage?" and no valid JWT token is not denied but sent along to the productpage service, thereby allowing access to a protected resource without a valid JWT token.
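A policy of the shape described above might look like the following sketch (the issuer, JWKS URI and resource names are placeholders, not the exact configuration from any real deployment):

```yaml
# Hypothetical example of the vulnerable configuration: JWT validation is
# triggered only on an exact match of "/productpage" at the ingress gateway.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  targets:
  - name: istio-ingressgateway
  origins:
  - jwt:
      issuer: "issuer@example.com"             # placeholder issuer
      jwksUri: "https://example.com/jwks.json" # placeholder JWKS endpoint
      triggerRules:
      - includedPaths:
        - exact: /productpage
        # "/productpage?" fails this exact match, so the JWT filter never
        # runs and the request is forwarded (CVE-2020-8595)
  principalBinding: USE_ORIGIN
```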


Verifying Whether You’re Vulnerable

Since this vulnerability is in the Envoy filter added by Istio, you can also check the proxy image you’re using in the cluster locally to see if you’re currently vulnerable. Download and run this script to verify whether the proxy image you’re using is vulnerable. Thanks to Francois Pesce from the Google Istio team for helping us create this test script.

Upgrading to This Version

Please follow the upgrade instructions in our documentation to upgrade to these versions. Since this vulnerability affects the sidecar proxies and gateways (ingress and egress), it is important to follow the post-upgrade tasks here and perform a rolling upgrade of all of your sidecar proxies and gateways.


Announcing Aspen Mesh Secure Ingress Policy

In the new Application Economy, developers are building and deploying applications at a far greater frequency than ever before. Organizations gain this agility by shifting to microservices architectures, powered by Kubernetes and continuous integration and delivery (CI/CD). For businesses to derive value from these applications, they need to be exposed to the outside world in a secure way so that their customers can access them--and have a great user experience. That’s such an obvious statement, you’re probably wondering why I even bothered saying it.

Well, within most organizations, securely exposing an application to the outside world is complicated. Ports, protocols, paths, and auth requirements need to be collected. Traffic routing resources and authentication policies need to be configured. DNS entries and TLS certificates need to be created and mounted. Application teams know some of these things and platform owners know others. Pulling all of this together is painful and time consuming.

This problem is exacerbated by the lack of APIs mapping intent to the persona performing the task. Let’s take a quick look at the current landscape.

Kubernetes allows orchestration (deploying and upgrading) of applications, but it doesn't provide any capability to capture application behavior like the protocols, paths and security requirements of their APIs. To securely expose an application to its users, platform operators currently need to capture this information from developers in private conversations and create additional Kubernetes resources like Ingress, which in turn creates the plumbing to allow traffic from outside the cluster and route it to the appropriate backend application. Alternatively, advanced routing capabilities in Istio make it possible to control more aspects of traffic management, while also allowing developers to offload functionality like JWT authentication from their applications to the infrastructure.

But the missing piece in both of these scenarios is a reliable and scalable way of gathering information about the applications from the developers independent of platform operators and enabling platform operators to securely configure the resources they need to expose these applications.

Additionally, configuring the Ingress or Istio routing APIs is only part of the puzzle. Operators also need to set up domain names (DNS) and get domain certificates (static, or dynamic via Let’s Encrypt, for example) in order to secure the traffic getting into their clusters. All of this requires managing a lot of moving pieces, with the possibility of failures in multiple steps along the way.

Aspen Mesh Policy Framework - Before


To solve these challenges, we are excited to announce the release of Aspen Mesh Secure Ingress Policy.

A New Way to Securely Expose Your Applications

Our goal in developing this new Secure Ingress Policy framework is to help streamline communication between application developers and platform operators. With this new feature, both personas can be productive and work together.

The way it works is application developers provide a specification for their service which they can store in their code management systems and communicate to Aspen Mesh through an Application API. This spec includes the service port and protocols to expose via Ingress and the API paths and authentication requirements (e.g., JWT validation).

Platform operators provide a specification that defines the security and networking aspects of the platform and communicates it to Aspen Mesh via a Secure Ingress API. This spec includes certificate secrets, domain name, and JWKS server and issuer.

Aspen Mesh Policy Framework - After


Aspen Mesh takes these inputs and creates all of the necessary system resources (Istio Gateways, VirtualServices and Authentication Policies), configures DNS entries and retrieves certificates to enable secure access to the application. If you have configured these by hand, you know the complexity involved in getting this right. With this new feature, we want our customers to focus on what’s important to them and let Aspen Mesh take care of their infrastructure needs. Additionally, the Aspen Mesh controllers always keep the resources in sync and augment them as the Secure Ingress and Application resources are updated.

Another important benefit of mapping APIs to personas is the ability to create ownership and store configuration in the right place. Keep the application-centric specifications in code, right next to the application, so that you can review and update both as part of your normal code review process. You don’t need another process or workflow to apply configuration changes out-of-band with your code changes. And because these things live together, they can naturally be deployed at the same time, reducing misconfigurations.

The overarching goal for our APIs is to enable platform operators to retain the strategic point of control to enforce policies while allowing application developers to move quickly and deliver customer-facing features. And, most importantly, to allow our customers’ customers to use their applications reliably and securely.

Today, we're natively integrated with AWS Route 53, and we expect to offer integrations with Azure and GCP in the near future. We also retrieve and renew domain certificates from Let’s Encrypt; the only thing operators need to provide is their registered email address, and the rest is handled by the Aspen Mesh control plane.

Interested in learning more about Secure Ingress Policy? Reach out to one of our experts to learn more about how you can implement Aspen Mesh’s Secure Ingress Policies at your organization.


How A Service Mesh Can Make Application Delivery More Secure

What is the biggest business advantage that today’s companies have? Their level of agility. 

A business’s agility is what allows them to rapidly grow their revenue streams, respond to customer needs and defend against disruption. It is the need for agility that drives digital transformations and pushes companies to define new ways of working, develop new application architectures and embrace cloud and container technologies.

But agility alone won’t get a business where they need to be; agility with stability is the critical competitive advantage. Companies that can move faster and rapidly meet evolving customer needs — while staying out of the news for downtime and security breaches — will be the winners of tomorrow.

Service meshes help organizations achieve agility with stability by increasing the visibility and observability of their microservices, allowing them to gain control over a complex solution and to enforce their applications’ security and compliance requirements. As companies continue to adopt cloud native technologies, they must not lose sight of ensuring that the applications they deliver are secure and compliant--and a service mesh provides many tools that allow them to do exactly that.

Let the Experts Be Experts

In order to ensure that applications are secure, organizations need security and compliance experts. And, those experts need to be leveraged to create business-wide policies that protect customer and company data. However, all too often in the DevOps world, the implementation and application of those policies is left to application teams that are already implementing the individual microservices that make up the larger application. The individual teams do not have the expertise or context to understand the larger security needs of the business, or worse, they may see security requirements as an impediment to delivering their code to production on schedule.

A service mesh lets experts be experts by allowing them to create security and authorization policies that are applied as a transparent layer under the application services, regardless of the application developer’s decisions. This security layer aligns the burden of implementation with the people who have the most interest in its success, while removing friction from the people who are least invested. This allows the business to be confident that their applications are as compliant--and their data is as secure--as their risk profile requires.

Encryption and Identity for Zero Trust

Data needs to be protected at all times, not just while it is at rest in a database somewhere. This includes ensuring that data is encrypted while moving between microservices, whether or not that traffic ever hits a physical wire on the network. Protecting that data means that you know:

  1. Who has access to the data
  2. That you trust them
  3. That they are sending and receiving the data securely

Because a service mesh is a transparent infrastructure layer that sits between the network and the microservices, it is the perfect place to ensure data encryption, identity, trust and permission.

By deploying a service mesh, organizations can ensure a secure by default posture in a zero-trust environment without changing existing applications or burdening application developers with complex authentication schemes, certificate management or permission revocation and additions. By delegating those functionalities to the mesh, organizations can easily deploy a more secure and compliant application environment with greater efficiency, less overhead and more confidence in their security posture.
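
In Istio, for example, delegating encryption and identity to the mesh can be as small as a single resource. A minimal sketch enabling strict mTLS for the whole mesh:

```yaml
# Require mTLS for all workloads in the mesh; the sidecars handle
# certificate issuance, rotation and verification automatically.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace scopes this mesh-wide
spec:
  mtls:
    mode: STRICT
```

No application changes are required; workloads that receive plaintext traffic after this is applied will reject it, which is the secure-by-default posture described above.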

Find and Fix with a Service Mesh

Mistakes will happen, and security policies will have holes in them. Organizations shouldn't expect people and the policies they create to be perfect, but they must expect to find and fix those mistakes before others discover and exploit them. Some of this can be done with tools and libraries that run inside the application’s code or container, or with firewalls and other products that run in the physical network. But these techniques miss one key element: what is going on as requests flow in and out of the application while those requests are inside of the cluster and its hosts.

A service mesh, especially an Istio-based sidecar mesh like Aspen Mesh, provides organizations with a unique view into every microservice’s request/response behavior. With this additional visibility, you can understand the behavior of a service’s traffic before and after it leaves the application’s code and container, forming a request trace from source to destination and back. Not only does this allow you to find anomalous requests and unknown traffic sources and destinations, it also allows you to stop them from accessing services they should not have access to through security and policy changes. Even more importantly, these policy changes can happen without directly impacting or changing the application, reducing the time it takes to close security holes while lessening the overall risk of exploits.

As organizations continue to embrace cloud and container technologies — and their use of those technologies matures and scales — a service mesh will become a vital part of their security and compliance strategy.

Learn More About Securing Containerized Applications

Interested in learning more about service mesh and security? Fill out the form below to get the white paper on how a service mesh can help you adopt a Zero-Trust security posture for your containerized applications.


Top 3 Service Mesh Developments in 2020

In 2019, we saw service mesh move beyond an experimental technology and into a solution that organizations are beginning to learn is an elemental building block for any successful Kubernetes deployment. Adoption of service mesh at scale, across companies large and small, began to gain steam. As the second wave of adopters watched the cutting edge adopters trial and succeed with service mesh technology, they too began to evaluate service mesh to address the challenges Kubernetes leaves on the table. 

In tandem with growing adoption of service mesh, 2019 brought a burgeoning service mesh market. Istio and Linkerd kept chugging along, and the tooling and vendor ecosystem around Istio nearly tripled over the course of the year. But many new players also entered the market with alternative approaches to solving layer 7 networking challenges. Meshes such as Kuma and Maesh emerged to address various edge use cases. We also saw tools like the SMI Spec and Meshery attempt to engage an early market that is flourishing thanks to immense opportunity but has yet to consolidate, as key players wait for the market to pick winners. Adjacent projects like Network Service Mesh bring service mesh principles to lower layers of the stack.

While there is still much to be settled in the service mesh space, the value of service mesh as a technology pattern is clear, as evidenced by the recently released “Voice of the Enterprise: DevOps,” 1H2019 survey conducted by 451 Research.

While still a nascent market, the interest in and plan to adopt service mesh as a critical piece of infrastructure is quickly catching up to that of Kubernetes and containers. 

Service Mesh in 2020: The Top-3 Developments 

1. A quickly growing need for service mesh

Kubernetes is exploding. It has become the preferred choice for container orchestration in the enterprise and in greenfield deployments. There are real challenges that are causing brownfield to lag behind, but those are being explored and solved. Yes, Kubernetes is a nascent technology. And yes, much of the world is years away from adopting it. But it’s clear that Kubernetes has become--and will continue to be--a dominant force in the world of software. 

If Kubernetes has won and the scale and complexity of Kubernetes-based applications will increase, there is a tipping point where service mesh becomes all but required to effectively manage those applications. 

2. Istio Will Be Hard to Beat

There’s likely room for a few other contenders in the market, but we will see market consolidation begin in 2020. In the long term, it’s probable that we’ll see a Kubernetes-like situation where a winner emerges and companies begin to standardize around that winner. It’s conceivable that service mesh may not be the technology pattern ultimately picked to solve layer 7 networking issues. But if it is, it seems likely that Istio becomes the de facto service mesh. There are many arguments for and against this, but the most telling factor is the ecosystem developing around Istio. Almost every major software vendor has an Istio solution or integration, and the Istio open source community far surpasses any other in terms of activity and contributions.

3. Use Cases, Use Cases, Use Cases

2019 was the year when problems apt for service mesh to solve were identified. Early adopters chose the top two or three capabilities they wanted from service mesh and dove in. In the past year, the three most commonly requested solutions have been: 

  • mTLS
  • Observability 
  • Traffic management 

2020 will be the year that core service mesh use cases emerge and are used as models for the next wave of adopters to implement service mesh solutions. 

The top use cases that our customers ask for are:

  • Observability to better understand cluster status, quickly debug and more deeply understand systems to architect more resilient and stable systems moving forward
  • Leveraging service mesh policy to drive intended application behaviors
  • Enforcing and proving a secure and compliant environment
  • Technologies like WASM making it possible to distribute existing functionality to dataplane sidecars, as well as build new intelligence and programmability

If you are already using a service mesh, you understand the value it brings. If you’re considering a service mesh, pay close attention to this space; the growing number of use cases will make the real-world value proposition clearer in the year ahead. At Aspen Mesh, we’re always happy to talk about service mesh, the best path to implementation and how our customers are solving problems. Feel free to reach out!

Service Mesh for App Owners

How Service Mesh Can Benefit Your Applications

You’ve heard the buzz about service mesh, and if you're like most App Owners, that means you have a lot of questions. Is it something that will be worthwhile for your company to adopt? What business outcomes does a service mesh provide? Can it help you better manage your microservices? What are some measurements of success to think about when you’re considering or using a service mesh?

To start with, here are five key considerations for evaluating service mesh:

  1. Consider how a service mesh supports your organization's strategic vision and objectives
  2. Have someone in your organization take inventory of your technical requirements and your current systems
  3. Identify resources needed (internal or external) for implementation – all the way through to running your service mesh
  4. Consider how timing, cost and expertise will impact the success of your service mesh implementation
  5. Design a plan to implement, run, measure and improve over time

Business Outcomes From a Service Mesh

As an App Owner, you’re ultimately on the hook for business outcomes at your company. When you're considering adding new tech to your stack, consider your strategies first. What do you plan to accomplish, and how do you intend to make those accomplishments become a reality? 

Whatever your answers may be, if you're using microservices, a service mesh is worth investigating. It has the potential to help you get from where you are to where you want to be -- more securely, and faster.

But apart from just reaching your goals faster and more securely, a service mesh can offer a lot of additional benefits. Here are a few:

  • Decreasing risk
  • Optimizing cost
  • Driving better application behavior
  • Progressive delivery 
  • Gaining a competitive advantage

Decreasing Risk

Risk analysis. Security. Compliance. These topics are priority one, if you want to stay out of the news. But a service mesh can help to provide your company with better -- and provable -- security and compliance.

Security & Compliance

Everyone’s asking a good question: What does it take to achieve security in cloud native environments?

We know that there are a lot of benefits in cloud-native architectures: greater scalability, resiliency and separation of concerns. But new patterns also bring new challenges like ephemerality and new security threats.

With an enterprise service mesh, you get access to observability into security status, end-to-end encryption, compliance features and more. Here are a few security features you can expect from a service mesh:

  • mTLS status at-a-glance: Easily understand the security posture of every service in your cluster
  • Incremental mTLS: Control exactly what’s encrypted in your cluster at the service or namespace level
  • Fine-grained RBAC: Enforce the level of least privilege to ensure your organization does not create a security concern
  • Egress control: Understand and control exactly what your services are talking to outside your clusters
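
Incremental mTLS, for instance, can be sketched in Istio by scoping a PeerAuthentication resource to a single namespace rather than the whole mesh. The namespace name here is hypothetical:

```yaml
# Enforce mTLS only for workloads in the "checkout" namespace,
# leaving the rest of the cluster in its current mode.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: checkout-strict
  namespace: checkout
spec:
  mtls:
    mode: STRICT
```

This lets you roll encryption out service by service or namespace by namespace instead of flipping one cluster-wide switch.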

Optimizing Cost

Every business needs cost optimizations. How do you choose which are going to make an impact and which aren’t? Which are most important? Which are you going to use?

As you know, one aspect to consider is talent. Your business does better when your people are working on new features and functionality rather than spending too much of their time on bug fixes. A service mesh can help boost your development team’s productivity, allowing them to spend more time on new business value adds and differentiators rather than bug fixes and maintenance.

But internal resources aren’t the only thing to consider. Without end-users, your company wouldn’t exist. It’s becoming increasingly important to provide a better user experience for both your stakeholders as well as your customers.

A service mesh helps applications running on microservice architectures rather than monolithic architectures. Microservices make it easier to build and maintain applications, offering greater agility, faster time to market and more uptime.

A service mesh can help you get the ideal mix of these cost savings and uptime.

Driving Better Application Behavior 

What happens when a new application wants to be exposed to the internet? You need to consider how to secure it, how to integrate it into your existing user-facing APIs, how you'll upgrade it and a host of other concerns. You're embracing microservices, so you might be doing this thing a lot. You want to drive better application behavior. Our advice here? You should use a service mesh policy framework to do this consistently, organization-wide.

Policy is simply a term for describing the way a system responds when something happens. A service mesh can help you improve your company’s policies by allowing you to: 

  1. Provide a clean interface specification between application teams who make new functionality and the platform operators who make it impactful to your users
  2. Make disparate microservices act as a resilient system through controlling how services communicate with each other and external systems and managing it through a single control plane
  3. Allow engineers to easily implement policies that can be mapped to application behavior outcomes, making it easy to ensure great end user experiences

An enterprise service mesh like Aspen Mesh enables each subject-matter expert in your organization to specify policies that enable you to get the intended behavior out of your applications and easily understand what that behavior will be. You can specify, from a business objective level, how you want your application to respond when something happens and use your service mesh to implement that.
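
As one illustration, an operator could encode a resilience policy such as timeouts and retries without touching application code. This is a hedged sketch using Istio traffic management; the service name is hypothetical:

```yaml
# Give every call to "inventory" a 2s overall timeout and up to
# 3 retries on transient failures, keeping user latency bounded.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: inventory
spec:
  hosts:
  - inventory
  http:
  - timeout: 2s
    retries:
      attempts: 3
      perTryTimeout: 500ms
      retryOn: 5xx,connect-failure
```

The application team ships the service; the platform team maps the business requirement ("stay responsive even when a dependency hiccups") onto mesh policy.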

Progressive Delivery

Continuous delivery has been a driving force behind software development, testing and deployment for years, and CI/CD best-practices are evolving with the advent of new technologies like Kubernetes and Istio. Progressive delivery, a term coined by James Governor, is a new approach to continuous delivery that includes “a new basket of skills and technologies… such as canarying, feature flags, [and] A/B testing at scale”.  

Progressive delivery decouples LOB and IT by allowing the business to say when it’s acceptable for new code to hit the customer. This means that the business can put guardrails around the customer experience through decoupling dev cycles and service activation. 

With progressive delivery:

  • Deployment is not the same as release
  • Service activation is not the same as deployment
  • A developer can deploy a service and you can ship it, but that doesn't mean it's activated for all users

Progressive delivery provides a better developer experience and also allows you to limit the blast radius of new deployments with feature flags, canary deploys and traffic mirroring. 
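
With Istio, for instance, a canary rollout can be sketched as a weighted traffic split. The service and subset names below are hypothetical, and the `v1`/`v2` subsets assume a matching DestinationRule defines them:

```yaml
# Send 5% of traffic to the new "v2" version of the reviews
# service while 95% stays on the known-good "v1".
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 95
    - destination:
        host: reviews
        subset: v2
      weight: 5
```

Shifting the weights over time is how deployment is decoupled from release: v2 is deployed and running well before all users see it.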

Gaining A Competitive Advantage

To stay ahead of your competition, you need an edge. Companies of many sizes across industries benefit from microservices or a service mesh. Enterprise companies evaluating or using a service mesh come in lots of different flavors: those who are just starting, going through or have completed a digital transformation; companies shifting from monoliths to microservices; and even organizations already using microservices that are working to identify areas for improvement.

Service Mesh Success Measurements

How do you plan to measure success with your service mesh? Since service mesh is new and evolving, it can be difficult to know what to look for in order to get a real pulse on how well it’s working for your company.

Start by asking some questions like these:

  1. Saving Resources: Is your team more efficient with a service mesh? How much more time are they able to spend on feature and function development rather than bug fixes and maintenance? 
  2. Your Users' Experience: Do you have a complete picture of your customers' experience and know the most valuable places to improve? How much more successful are deployments to production?
  3. Increasing Efficiency: How much time do you spend figuring out which microservice is causing an issue? Does your service mesh save you time here?

These are just a few ways to think about how your service mesh is working for you, as well as a built-in way to identify areas to improve over time. As with any really useful application, it's not just a one-and-done implementation. You'll have greater success by integrating measurement, iteration and improvement into your digital transformation and service mesh strategies.

Interested in learning more about service mesh? Check out the eBook Getting the Most Out of Your Service Mesh.

What a Service Mesh Provides

If you’re like most people with a finger in the tech-world pie, you’ve heard of a service mesh. And you know what a service mesh is. And now you’re wondering what it can solve for you.

A service mesh is an infrastructure layer for microservices applications that can help reduce the complexity of managing microservices and deployments by handling infrastructure service communication quickly, securely and reliably. Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

A good service mesh keeps your company’s services running the way they should, giving you and your team access to the powerful tools that you need, plus access to engineering and support, so you can focus on adding the most value to your business.

Want to learn more about this? Check out the free Complete Guide to Service Mesh.

Next, let’s dive into three key areas where a service mesh can really help: observability, security and operational control.


Observability

Are you interested in taking your system monitoring a step further? A service mesh provides monitoring plus observability. While monitoring reports overall system health, observability focuses on highly granular insights into the behavior of systems along with rich context.

Deep System Insights

Kubernetes seemed like the way to rapid iteration and quick development sprints, but the promise and the reality of managing containerized applications at scale are two very different things.

Docker and Kubernetes enable you to more easily build and deploy apps. But it’s often difficult to understand how those apps are behaving once deployed. So, a service mesh provides tracing and telemetry metrics that make it easy to understand your system and quickly root cause any problems.

An Intuitive UI

A service mesh is uniquely positioned to gather a trove of important data from your services. The sidecar approach places an Envoy sidecar next to every pod in your cluster, which then surfaces telemetry data up to the Istio control plane. This is great, but it also means a mesh will gather more data than is useful. The key is surfacing only the data you need to confirm the health and security status of your services. A good UI solves this problem, and it also lowers the bar on the engineering team, making it easier for more members of the team to understand and control the services in your organization’s architecture.


Security

A service mesh provides security features aimed at securing the services inside your network and quickly identifying any compromising traffic entering your cluster. A service mesh can help you more easily manage security through mTLS, ingress and egress control, and more.

mTLS and Why it Matters

Securing microservices is hard. There are a multitude of tools that address microservices security, but service mesh is the most elegant solution for addressing encryption of on-the-wire traffic within the network.

Service mesh provides defense with mutual TLS (mTLS) encryption of the traffic between your services. The mesh can automatically encrypt and decrypt requests and responses, removing that burden from the application developer. It can also improve performance by prioritizing the reuse of existing, persistent connections, reducing the need for the computationally expensive creation of new ones. With a service mesh, you can secure traffic over the wire and also enforce strong identity-based authentication and authorization for each microservice.

We see a lot of value in this for enterprise companies. With a good service mesh, you can see whether mTLS is enabled and working between each of your services and get immediate alerts if security status changes.

Ingress & Egress Control

Service mesh adds a layer of security that allows you to monitor and address compromising traffic as it enters the mesh. Istio integrates with Kubernetes as an ingress controller and takes care of load balancing for ingress. This allows you to add a level of security at the perimeter with ingress rules. Egress control allows you to see and manage external services and control how your services interact with them.
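
Egress control in an Istio-based mesh can be sketched with a ServiceEntry that explicitly registers an allowed external destination. The hostname here is illustrative, and the "block everything unregistered" behavior assumes the mesh's outbound traffic policy is set to `REGISTRY_ONLY`:

```yaml
# Register an external API so the mesh can observe and control
# calls to it; with outboundTrafficPolicy REGISTRY_ONLY, calls to
# unregistered hosts are blocked by default.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-payments-api
spec:
  hosts:
  - api.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
```

This turns "what are my services talking to outside the cluster?" from a forensic question into an explicit, auditable allowlist.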

Operational Control

A service mesh allows security and platform teams to set the right macro controls to enforce access controls, while allowing developers to make customizations they need to move quickly within these guardrails.


A strong Role-Based Access Control (RBAC) system is arguably one of the most critical requirements in large engineering organizations, since even the most secure system can be easily circumvented by overprivileged users or employees. Restricting privileged users to the least privileges necessary to perform their job responsibilities, ensuring access to systems is set to “deny all” by default, and maintaining proper documentation detailing roles and responsibilities are among the most critical security concerns in the enterprise.
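
The “deny all by default” posture maps naturally onto mesh policy. In Istio, for example, an AuthorizationPolicy with no rules matches nothing, so it denies all traffic to the workloads it covers unless another policy explicitly allows it (the namespace name here is hypothetical):

```yaml
# With no rules specified, no request matches, so all traffic to
# workloads in "prod" is denied unless another policy allows it.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: prod
spec: {}
```

Teams then layer narrow ALLOW policies on top, which is the least-privilege pattern described above.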

We’ve worked to solve this challenge by providing Istio Vet, which is designed to warn you of incorrect or incomplete configuration of your service mesh and to provide guidance to fix it, preventing misconfigurations from taking hold in the first place. Global Istio configuration resources require a different approach, which is addressed by the Traffic Claim Enforcer solution.

The Importance of Policy Frameworks

As companies embrace DevOps and microservice architectures, their teams are moving more quickly and autonomously than ever before. The result is a faster time to market for applications, but more risk to the business. The responsibility of understanding and managing the company’s security and compliance needs is now shifted left to teams that may not have the expertise or desire to take on this burden.

Service mesh makes it easy to control policy and understand how policy settings will affect application behavior. In addition, analytics insights help you get the most out of policy through monitoring, vetting and policy violation analytics so you can quickly understand the best actions to take.

Policy frameworks allow you to securely and efficiently deploy microservices applications while limiting risk and unlocking DevOps productivity. Key to this innovation is the ability to synthesize business-level goals, regulatory or legal requirements, operational metrics, and team-level rules into high performance service mesh policy that sits adjacent to every application.

A good service mesh keeps your company’s services running the way they should, giving you observability, security and operational control plus access to engineering and support, so you are free to focus on adding more value to your business.

If you’d like to learn more about this, get your free copy of the Complete Guide to Service Mesh here.



How to Get the Most Out of Your Service Mesh

You’ve been hearing about service mesh. You have an idea of what it does and how it can help you manage your microservices. But what happens once you have one? How do you get as much out of it as you can?

Let’s start with a quick review of what a service mesh is, why you would need one, then move on to how to get the most out of your service mesh.

What's a Service Mesh?

  1. A transparent infrastructure layer that sits between your network and application, helping with communications between your microservices

  2. Could be your next game changing decision

A service mesh is designed to handle a high volume of service-to-service communication using application programming interfaces (APIs). It ensures that communication among containerized application services is fast, reliable and secure. The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and write-once, run anywhere policy for microservices in your Kubernetes clusters.

Service meshes also address challenges that arise when your application is being consumed by an end user. The first key capability is monitoring the health of services provided to the end user, and then tracing problems with that health quickly to the correct microservice. Next, you'll need to ensure communication is secure and resilient.

When Do You Need a Service Mesh?

We’ve been having lots of discussions with people spread across the microservices, Kubernetes and service mesh adoption curves. And while it’s clear that many enterprise organizations are at least considering microservices, many are still waiting to see best practices emerge before deciding on their own path forward. That means the landscape changes as needs are evolving. 

As an example, more organizations are looking to microservices for brownfield deployments, whereas – even a couple of years ago – almost everyone only considered building microservices architectures for greenfield. This tells us that as microservices technology and tooling continues to evolve, it’s becoming more feasible for non-unicorn companies to effectively and efficiently decompose the monolith into microservices. 

Think about it this way: in the past six months, the top three reasons we’ve heard people say they want to implement service mesh are:

  1. Observability – to better understand the behavior of Kubernetes clusters 
  2. mTLS – to add cluster-wide service encryption
  3. Distributed Tracing – to simplify debugging and speed up root cause analysis

Gauging the current state of the cloud-native infrastructure space, there’s no doubt that there’s still more exploration and evaluation of tools like Kubernetes and Istio. But the gap is definitely closing. Companies are closely watching the leaders in the space to see how they are implementing and what benefits and challenges they are facing. As more organizations successfully adopt these new technologies, it’s becoming obvious that, while there’s a skills gap and new complexity that must be accounted for, the outcomes around increased velocity, better resiliency and improved customer experience mandate that many organizations actively map their own path with microservices. This will help to ensure that they are not left behind by the market leaders in their space.

Getting the Most Out of Your Service Mesh

In order to really stay ahead of the competition, you need to know best practices for getting the most out of your service mesh, recommendations from industry experts about how to measure your success, and ways to keep getting even more out of your technology.

But what do you want out of a service mesh? Since you’re reading this, there’s a good chance you’re responsible for making sure that your end users get the most out of your applications. That’s probably why you started down the microservices path in the first place.

If that’s true, then you’ve probably realized that microservices come with their own unique challenges, such as:

  • Increased surface area that can be attacked
  • Polyglot challenges
  • Controlling access for distributed teams developing towards a single application

That’s where a service mesh comes in. Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

TL;DR: a good service mesh keeps your company’s services running the way they should, giving you the observability, security and traffic management capabilities you need to effectively manage and control containerized applications so you can focus on adding the most value to your business.

When Service Mesh is a Win/Win

Service mesh is an application that can help entire organizations work together for better outcomes. In other words, service mesh is the ultimate DevOps enabler.

Here are a few highlights of the value a service mesh provides across teams:

  • Observability: take system monitoring a step further by providing observability. Monitoring reports overall system health, while observability focuses on highly granular insights into the behavior of systems along with rich context
  • Security and Decreased Risk: better secure the services inside your network and quickly identify any compromising traffic entering your clusters
  • Operational Control: allow security and platform teams to set the right macro controls to enforce access controls, while allowing developers to make customizations they need to move quickly within defined guardrails
  • Increase Efficiency with a Developer Toolbox: remove the burden of managing infrastructure from the developer and provide developer-friendly features such as distributed tracing and easy canary deploys 

What’s the Secret to Getting the Most Out of Your Service Mesh?

There are a lot of things you can do to get more out of your service mesh. Here are three high level tactics to start with:

  1. Align on service mesh goals with your teams
  2. Choose the service mesh that can be broadly deployed to address your company's needs
  3. Measure your service mesh success over time in order to identify and make improvements

Still looking for more info about this? Check out the eBook: Getting the Most Out of Your Service Mesh.

Complete this form to get your copy of the eBook Getting the Most Out of Your Service Mesh: