Top 3 Service Mesh Developments in 2020

In 2019, we saw service mesh move beyond an experimental technology and into a solution that organizations increasingly recognize as an elemental building block for any successful Kubernetes deployment. Adoption of service mesh at scale, across companies large and small, began to gain steam. As the second wave of adopters watched the early adopters trial and succeed with service mesh technology, they too began to evaluate service mesh to address the challenges Kubernetes leaves on the table.

In tandem with growing adoption of service mesh, 2019 offered a burgeoning service mesh market. Istio and Linkerd kept chugging along, and the tooling and vendor ecosystem around Istio almost tripled throughout the year. But many new players also entered the market with alternative approaches to solving layer 7 networking challenges. Meshes such as Kuma and Maesh emerged to provide different approaches to service mesh and to address various edge use cases. We also saw the introduction of tools like the SMI spec and Meshery, which attempt to engage an early market that is flourishing due to immense opportunity but has yet to consolidate, as key players wait for the market to choose its winners. Adjacent projects like Network Service Mesh bring service mesh principles to lower layers of the stack.

While there is still much to be settled in the service mesh space, the value of service mesh as a technology pattern is clear, as evidenced by the recently released “Voice of the Enterprise: DevOps,” 1H2019 survey conducted by 451 Research.

While still a nascent market, interest in and plans to adopt service mesh as a critical piece of infrastructure are quickly catching up to those of Kubernetes and containers.

Service Mesh in 2020: The Top 3 Developments

1. A quickly growing need for service mesh

Kubernetes is exploding. It has become the preferred choice for container orchestration in the enterprise and in greenfield deployments. There are real challenges causing brownfield deployments to lag behind, but those are being explored and solved. Yes, Kubernetes is a nascent technology. And yes, much of the world is years away from adopting it. But it’s clear that Kubernetes has become--and will continue to be--a dominant force in the world of software.

If Kubernetes has won and the scale and complexity of Kubernetes-based applications will increase, there is a tipping point where service mesh becomes all but required to effectively manage those applications. 

2. Istio Will Be Hard to Beat

There’s likely room for a few other contenders in the market, but we will see market consolidation begin in 2020. In the long term, it’s probable that we’ll see a Kubernetes-like situation where a winner emerges and companies begin to standardize around that winner. It’s conceivable that service mesh may not be the technology pattern that is picked to solve layer 7 networking issues. But if it is, it seems likely that Istio becomes the de facto service mesh. There are many arguments for and against this, but the most telling factor is the ecosystem developing around Istio. Almost every major software vendor has an Istio solution or integration, and the Istio open source community far surpasses the others in terms of activity and contributions.

3. Use Cases, Use Cases, Use Cases

2019 was the year when problems apt for service mesh to solve were identified. Early adopters chose the top two or three capabilities they wanted from service mesh and dove in. In the past year, the three most commonly requested solutions have been:

  • mTLS
  • Observability 
  • Traffic management 

2020 will be the year that core service mesh use cases emerge and are used as models for the next wave of adopters to implement service mesh solutions. 

The top use cases that our customers ask for are:

  • Observability to better understand cluster status, debug more quickly and understand systems deeply enough to architect more resilient and stable systems moving forward
  • Leveraging service mesh policy to drive intended application behaviors
  • Enforcing and proving a secure and compliant environment
  • Leveraging technologies like WASM to distribute existing functionality to dataplane sidecars, as well as to build new intelligence and programmability

If you are already using a service mesh, you understand the value it brings. If you’re considering a service mesh, pay close attention to this space; the growing number of use cases will make the real-world value proposition clearer in the year ahead. At Aspen Mesh, we’re always happy to talk about service mesh, the best path to implementation and how our customers are solving problems. Feel free to reach out!

Service Mesh For App Owners

How Service Mesh Can Benefit Your Applications

You’ve heard the buzz about service mesh, and if you're like most App Owners, that means you have a lot of questions. Is it something that will be worthwhile for your company to adopt? What business outcomes does a service mesh provide? Can it help you better manage your microservices? What are some measurements of success to think about when you’re considering or using service mesh?

To start with, here are five key considerations for evaluating service mesh:

  1. Consider how a service mesh supports your organization's strategic vision and objectives
  2. Have someone in your organization take inventory of your technical requirements and your current systems
  3. Identify resources needed (internal or external) for implementation – all the way through to running your service mesh
  4. Consider how timing, cost and expertise will impact the success of your service mesh implementation
  5. Design a plan to implement, run, measure and improve over time

Business Outcomes From a Service Mesh

As an App Owner, you’re ultimately on the hook for business outcomes at your company. When you're considering adding new tech to your stack, consider your strategies first. What do you plan to accomplish, and how do you intend to make those accomplishments become a reality? 

Whatever your answers may be, if you're using microservices, a service mesh is worth investigating. It has the potential to help you get from where you are to where you want to be -- more securely, and faster.

But apart from just reaching your goals faster and more securely, a service mesh can offer a lot of additional benefits. Here are a few:

  • Decreasing risk
  • Optimizing cost
  • Driving better application behavior
  • Progressive delivery 
  • Gaining a competitive advantage

Decreasing Risk

Risk analysis. Security. Compliance. These topics are priority one if you want to stay out of the news. A service mesh can help provide your company with better -- and provable -- security and compliance.

Security & Compliance

Everyone’s asking a good question: What does it take to achieve security in cloud native environments?

We know that there are a lot of benefits in cloud-native architectures: greater scalability, resiliency and separation of concerns. But new patterns also bring new challenges like ephemerality and new security threats.

With an enterprise service mesh, you get access to observability into security status, end-to-end encryption, compliance features and more. Here are a few security features you can expect from a service mesh:

  • mTLS status at-a-glance: Easily understand the security posture of every service in your cluster
  • Incremental mTLS: Control exactly what’s encrypted in your cluster at the service or namespace level
  • Fine-grained RBAC: Enforce the principle of least privilege to ensure your organization does not create a security concern
  • Egress control: Understand and control exactly what your services are talking to outside your clusters
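
To make “incremental mTLS” concrete, here is a minimal sketch of what it looks like in recent Istio versions (1.5+); the `payments` namespace name is illustrative. A namespace-level PeerAuthentication resource requires encryption for that namespace’s workloads while the rest of the mesh keeps its default:

```yaml
# Illustrative: require mTLS for every workload in the "payments"
# namespace, while other namespaces keep the mesh-wide default.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments   # hypothetical namespace
spec:
  mtls:
    mode: STRICT
```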

Optimizing Cost

Every business needs cost optimizations. How do you choose which are going to make an impact and which aren’t? Which are most important? Which are you going to use?

As you know, one aspect to consider is talent. Your business does better when your people are working on new features and functionality rather than spending too much of their time on bug fixes. Tools like a service mesh can help boost your development team’s productivity, allowing them to spend more time on new business value adds and differentiators rather than bug fixes and maintenance.

But internal resources aren’t the only thing to consider. Without end-users, your company wouldn’t exist. It’s becoming increasingly important to provide a better user experience for both your stakeholders as well as your customers.

A service mesh helps applications running on microservice architectures rather than monolithic architectures. Microservices natively offer easier application building and maintenance, greater agility, faster time to market and more uptime.

A service mesh can help you get the ideal mix of these cost savings and uptime.

Driving Better Application Behavior 

What happens when a new application wants to be exposed to the internet? You need to consider how to secure it, how to integrate it into your existing user-facing APIs, how you'll upgrade it and a host of other concerns. You're embracing microservices, so you might be doing this a lot. You want to drive better application behavior. Our advice here? Use a service mesh policy framework to do this consistently, organization-wide.

Policy is simply a term for describing the way a system responds when something happens. A service mesh can help you improve your company’s policies by allowing you to: 

  1. Provide a clean interface specification between application teams who make new functionality and the platform operators who make it impactful to your users
  2. Make disparate microservices act as a resilient system through controlling how services communicate with each other and external systems and managing it through a single control plane
  3. Allow engineers to easily implement policies that can be mapped to application behavior outcomes, making it easy to ensure great end user experiences
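
As a sketch of point 3, a small piece of Istio configuration can map directly to an application behavior outcome; the `reviews` service name, timeout and retry values below are illustrative:

```yaml
# Illustrative resilience policy: retry transient failures and cap
# end-to-end latency for a hypothetical "reviews" service, with no
# application code changes.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    timeout: 2s
    retries:
      attempts: 3
      perTryTimeout: 500ms
```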

An enterprise service mesh like Aspen Mesh enables each subject-matter expert in your organization to specify policies that enable you to get the intended behavior out of your applications and easily understand what that behavior will be. You can specify, from a business objective level, how you want your application to respond when something happens and use your service mesh to implement that.

Progressive Delivery

Continuous delivery has been a driving force behind software development, testing and deployment for years, and CI/CD best-practices are evolving with the advent of new technologies like Kubernetes and Istio. Progressive delivery, a term coined by James Governor, is a new approach to continuous delivery that includes “a new basket of skills and technologies… such as canarying, feature flags, [and] A/B testing at scale”.  

Progressive delivery decouples the line of business (LOB) and IT by allowing the business to say when it’s acceptable for new code to hit the customer. This means the business can put guardrails around the customer experience by decoupling dev cycles and service activation.

With progressive delivery:

  • Deployment is not the same as release
  • Service activation is not the same as deployment
  • The developer can deploy a service, you can ship the service, but that doesn't mean you're activating it for all users

Progressive delivery provides a better developer experience and also allows you to limit the blast radius of new deployments with feature flags, canary deploys and traffic mirroring. 
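
A canary deploy is the simplest way to see the deploy/release split in practice. As a sketch (the service name and weights are illustrative, and it assumes a DestinationRule defining the `v1` and `v2` subsets), an Istio VirtualService can activate a new version for only a slice of users:

```yaml
# Illustrative canary: v2 is deployed, but only activated for 10% of traffic.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
  - checkout
  http:
  - route:
    - destination:
        host: checkout
        subset: v1
      weight: 90
    - destination:
        host: checkout
        subset: v2   # deployed alongside v1, released to a small slice
      weight: 10
```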

Gaining A Competitive Advantage

To stay ahead of your competition, you need an edge. Companies of many sizes across industries benefit from microservices and a service mesh. Enterprise companies evaluating or using a service mesh come in lots of different flavors -- those just starting, going through or having completed a digital transformation, companies shifting from monoliths to microservices, and even organizations already using microservices that are working to identify areas for improvement.

Service Mesh Success Measurements

How do you plan to measure success with your service mesh? Since service mesh is new and evolving, it can be difficult to know what to look for in order to get a real pulse on how well it’s working for your company.

Start by asking some questions like these:

  1. Saving Resources: Is your team more efficient with a service mesh? How much more time are they able to spend on feature and function development rather than bug fixes and maintenance?
  2. Your Users' Experience: Do you have a complete picture of your customers' experience and know the most valuable places to improve? How much more successful are deployments to production?
  3. Increasing Efficiency: How much time do you spend figuring out which microservice is causing an issue? Does your service mesh save you time here?

These are just a few ways to think about how your service mesh is working for you, as well as a built-in way to identify areas to improve over time. As with any really useful application, it's not just a one-and-done implementation. You'll have greater success by integrating measurement, iteration and improvement into your digital transformation and service mesh strategies.

Interested in learning more about service mesh? Check out the eBook Getting the Most Out of Your Service Mesh.

What a Service Mesh Provides

If you’re like most people with a finger in the tech-world pie, you’ve heard of a service mesh. And you know what a service mesh is. And now you’re wondering what it can solve for you.

A service mesh is an infrastructure layer for microservices applications that can help reduce the complexity of managing microservices and deployments by handling infrastructure service communication quickly, securely and reliably. Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

A good service mesh keeps your company’s services running the way they should, giving you and your team access to the powerful tools that you need — plus access to engineering and support — so you can focus on adding the most value to your business.

Want to learn more about this? Check out the free Complete Guide to Service Mesh.

Next, let’s dive into three key areas where a service mesh can really help: observability, security and operational control.


Observability

Are you interested in taking your system monitoring a step further? A service mesh provides monitoring plus observability. While monitoring reports overall system health, observability focuses on highly granular insights into the behavior of systems along with rich context.

Deep System Insights

Kubernetes seemed like the way to rapid iteration and quick development sprints, but the promise and the reality of managing containerized applications at scale are two very different things.

Docker and Kubernetes enable you to more easily build and deploy apps. But it’s often difficult to understand how those apps are behaving once deployed. So, a service mesh provides tracing and telemetry metrics that make it easy to understand your system and quickly root cause any problems.

An Intuitive UI

A service mesh is uniquely positioned to gather a trove of important data from your services. The sidecar approach places an Envoy sidecar next to every pod in your cluster, which then surfaces telemetry data up to the Istio control plane. This is great, but it also means a mesh will gather more data than is useful. The key is surfacing only the data you need to confirm the health and security status of your services. A good UI solves this problem, and it also lowers the bar for the engineering team, making it easier for more members of the team to understand and control the services in your organization’s architecture.


Security

A service mesh provides security features aimed at securing the services inside your network and quickly identifying any compromising traffic entering your cluster. A service mesh can help you more easily manage security through mTLS, ingress and egress control, and more.

mTLS and Why it Matters

Securing microservices is hard. There are a multitude of tools that address microservices security, but service mesh is the most elegant solution for addressing encryption of on-the-wire traffic within the network.

Service mesh provides defense with mutual TLS (mTLS) encryption of the traffic between your services. The mesh can automatically encrypt and decrypt requests and responses, removing that burden from the application developer. It can also improve performance by prioritizing the reuse of existing, persistent connections, reducing the need for the computationally expensive creation of new ones. With service mesh, you can secure traffic over the wire and also make strong identity-based authentication and authorizations for each microservice.

We see a lot of value in this for enterprise companies. With a good service mesh, you can see whether mTLS is enabled and working between each of your services and get immediate alerts if security status changes.

Ingress & Egress Control

Service mesh adds a layer of security that allows you to monitor and address compromising traffic as it enters the mesh. Istio integrates with Kubernetes as an ingress controller and takes care of load balancing for ingress. This allows you to add a level of security at the perimeter with ingress rules. Egress control allows you to see and manage external services and control how your services interact with them.
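
In Istio, egress control is typically expressed by registering the external services you allow; as a minimal sketch (the hostname is hypothetical), a ServiceEntry makes an outside dependency visible to the mesh so that everything unlisted can be blocked:

```yaml
# Illustrative egress allow-list entry for one external API.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-payments-api
spec:
  hosts:
  - api.example.com   # hypothetical external dependency
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
```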

Operational Control

A service mesh allows security and platform teams to set the right macro controls to enforce access controls, while allowing developers to make customizations they need to move quickly within these guardrails.


A strong Role Based Access Control (RBAC) system is arguably one of the most critical requirements in large engineering organizations, since even the most secure system can be easily circumvented by overprivileged users or employees. Restricting privileged users to the least privileges necessary to perform their job responsibilities, setting access to systems to “deny all” by default, and keeping proper documentation of roles and responsibilities in place are among the most critical security concerns in the enterprise.
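
“Deny all by default” can be expressed directly in Istio’s authorization model. As a sketch (assuming `istio-system` is the mesh’s root namespace), an empty AuthorizationPolicy there denies every request that no other policy explicitly allows:

```yaml
# Illustrative mesh-wide default-deny; narrower ALLOW policies then
# grant each workload only the access it needs.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: istio-system   # assumes istio-system is the root namespace
spec: {}
```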

We’ve worked to solve this challenge by providing Istio Vet, which is designed to warn you of incorrect or incomplete configuration of your service mesh and provide guidance to fix it. Istio Vet can also prevent misconfigurations by refusing to allow them in the first place. Global Istio configuration resources require a different approach, which is addressed by our Traffic Claim Enforcer solution.

The Importance of Policy Frameworks

As companies embrace DevOps and microservice architectures, their teams are moving more quickly and autonomously than ever before. The result is a faster time to market for applications, but more risk to the business. The responsibility of understanding and managing the company’s security and compliance needs is now shifted left to teams that may not have the expertise or desire to take on this burden.

Service mesh makes it easy to control policy and understand how policy settings will affect application behavior. In addition, analytics insights help you get the most out of policy through monitoring, vetting and policy violation analytics so you can quickly understand the best actions to take.

Policy frameworks allow you to securely and efficiently deploy microservices applications while limiting risk and unlocking DevOps productivity. Key to this innovation is the ability to synthesize business-level goals, regulatory or legal requirements, operational metrics, and team-level rules into high performance service mesh policy that sits adjacent to every application.

A good service mesh keeps your company’s services running the way they should, giving you observability, security and operational control plus access to engineering and support, so you are free to focus on adding more value to your business.

If you’d like to learn more about this, get your free copy of the Complete Guide to Service Mesh here.



How to Get the Most Out of Your Service Mesh

You’ve been hearing about service mesh. You have an idea of what it does and how it can help you manage your microservices. But what happens once you have one? How do you get as much out of it as you can?

Let’s start with a quick review of what a service mesh is, why you would need one, then move on to how to get the most out of your service mesh.

What's a Service Mesh?

  1. A transparent infrastructure layer that sits between your network and application, helping with communications between your microservices

  2. Could be your next game changing decision

A service mesh is designed to handle a high volume of service-to-service communication using application programming interfaces (APIs). It ensures that communication among containerized application services is fast, reliable and secure. The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and write-once, run anywhere policy for microservices in your Kubernetes clusters.

Service meshes also address challenges that arise when your application is being consumed by an end user. The first key capability is monitoring the health of services provided to the end user, and then tracing problems with that health quickly to the correct microservice. Next, you'll need to ensure communication is secure and resilient.

When Do You Need a Service Mesh?

We’ve been having lots of discussions with people spread across the microservices, Kubernetes and service mesh adoption curves. And while it’s clear that many enterprise organizations are at least considering microservices, many are still waiting to see best practices emerge before deciding on their own path forward. That means the landscape changes as needs are evolving. 

As an example, more organizations are looking to microservices for brownfield deployments, whereas – even a couple of years ago – almost everyone only considered building microservices architectures for greenfield. This tells us that as microservices technology and tooling continues to evolve, it’s becoming more feasible for non-unicorn companies to effectively and efficiently decompose the monolith into microservices. 

Think about it this way: in the past six months, the top three reasons we’ve heard people say they want to implement service mesh are:

  1. Observability – to better understand the behavior of Kubernetes clusters 
  2. mTLS – to add cluster-wide service encryption
  3. Distributed Tracing – to simplify debugging and speed up root cause analysis
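
For reason 2, cluster-wide service encryption reduces to a single resource in recent Istio versions (1.5+); this sketch assumes `istio-system` is the root namespace:

```yaml
# Illustrative mesh-wide mTLS: every workload in the mesh must
# send and accept encrypted traffic.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```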

Gauging the current state of the cloud-native infrastructure space, there’s no doubt that there’s still more exploration and evaluation of tools like Kubernetes and Istio. But the gap is definitely closing. Companies are closely watching the leaders in the space to see how they are implementing and what benefits and challenges they are facing. As more organizations successfully adopt these new technologies, it’s becoming obvious that while there’s a skills gap and new complexity that must be accounted for, the outcomes around increased velocity, better resiliency and improved customer experience mandate that many organizations actively map their own path with microservices. This will help ensure that they are not left behind by the market leaders in their space.

Getting the Most Out of Your Service Mesh

In order to really stay ahead of the competition, you need to know best practices about getting the most out of your service mesh, recommendations from industry experts about how to measure your success, and ways to think about how to keep getting even more out of your technology.

But what do you want out of a service mesh? Since you’re reading this, there’s a good chance you’re responsible for making sure that your end users get the most out of your applications. That’s probably why you started down the microservices path in the first place.

If that’s true, then you’ve probably realized that microservices come with their own unique challenges, such as:

  • Increased surface area that can be attacked
  • Polyglot challenges
  • Controlling access for distributed teams developing towards a single application

That’s where a service mesh comes in. Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

TL;DR: a good service mesh keeps your company’s services running the way they should, giving you the observability, security and traffic management capabilities you need to effectively manage and control containerized applications so you can focus on adding the most value to your business.

When Service Mesh is a Win/Win

Service mesh is a technology that can help entire organizations work together for better outcomes. In other words, service mesh is the ultimate DevOps enabler.

Here are a few highlights of the value a service mesh provides across teams:

  • Observability: take system monitoring a step further by providing observability. Monitoring reports overall system health, while observability focuses on highly granular insights into the behavior of systems along with rich context
  • Security and Decreased Risk: better secure the services inside your network and quickly identify any compromising traffic entering your clusters
  • Operational Control: allow security and platform teams to set the right macro controls to enforce access controls, while allowing developers to make customizations they need to move quickly within defined guardrails
  • Increase Efficiency with a Developer Toolbox: remove the burden of managing infrastructure from the developer and provide developer-friendly features such as distributed tracing and easy canary deploys 

What’s the Secret to Getting the Most Out of Your Service Mesh?

There are a lot of things you can do to get more out of your service mesh. Here are three high level tactics to start with:

  1. Align on service mesh goals with your teams
  2. Choose the service mesh that can be broadly deployed to address your company's needs
  3. Measure your service mesh success over time in order to identify and make improvements

Still looking for more info about this? Check out the eBook: Getting the Most Out of Your Service Mesh.

Complete this form to get your copy of the eBook Getting the Most Out of Your Service Mesh:

Service Mesh Insider: An Interview with Shawn Wormke

Have you ever wondered how we got to service mesh? What backgrounds, experiences and technologies led to the emergence of service mesh? 

We recently put together an interview with Aspen Mesh’s Founder, Shawn Wormke in order to get the inside scoop for you. Read on to find out the answers to these three questions:

  1. What role did your technical expertise play in how Aspen Mesh focuses on enterprise service mesh?
  2. Describe how your technical and business experience combined to create an enterprise startup and inform your understanding of how to run a “modern” software company?
  3. What characteristics define a “modern” enterprise, and how does Aspen Mesh contribute to making it a reality?

1. What role did your technical expertise play in how Aspen Mesh focuses on the enterprise?

I started my career at Cisco working in network security and firewalls on the ASA product line and later the Firewall Services Module for the Catalyst 6500/7600 series switches and routers. Both of these products were focused on the enterprise at a time when security was starting to move up the stack and become more and more distributed throughout the network. We were watching our customers move from L2 transparent firewalls to L3/L4 firewalls that required application logic in order to “fixup” dynamic ports for protocols like FTP, SIP and H.323. Eventually that journey up the stack continued to L7 firewalls that were doing URL, header and payload inspection to enforce security policy.

At the same time that this move up the stack was happening, customers were starting to look at migrating workloads to VMs and were demanding new form factors and valuing different performance metrics. No longer were speeds, feeds and dragstrip numbers important; the focus was shifting to footprint and elasticity. The result of this shift in priority was a change in mindset when it came to how enterprises were thinking about expenses. They started to shift expenses from large, capacity-stranding CAPEX purchases to more frequent OPEX transactions that were aligned with a software-first approach.

It was this idea that led me to join as one of the first engineers at a small startup in Boulder, CO called LineRate Systems which was eventually acquired by F5 Networks. The company was founded on a passion for making high performance, lightweight application delivery (aka load balancing) software that was as fast as the industry standard hardware. Our realization was that Commodity Off the Shelf (COTS) hardware had so much performance that if leveraged properly it was possible to offer the same performance at a much lower cost.

But the big idea, the one that ultimately got us noticed by F5, was that if the hardware was freely available (everyone had racks and racks of servers), we could charge our customers for a performance range and let them stamp out the software--as much as they needed--to achieve that. This removed the risk of the transaction from the customer, as they no longer had to pre-plan 3-5 years’ worth of capacity. It placed the burden on the provider to deliver an efficient, API-first elastic platform and a pricing model that scaled along the same dimensions as their business needs.

After acquisition we started to use containers and eventually Kubernetes for some of our build and test infrastructure. The use of these technologies led us to realize that they were great for increasing velocity and agility, but were difficult to debug and secure. We had no record of what our test containers did or who they talked to at runtime and we had no idea what data they were accessing. If we had a way to make sense of all of this, life would be so much easier.

This led us to work on some internal projects that experimented with ideas that we all now know as service mesh. We even released a product that was the beginning of this called the Application Services Proxy, which we ultimately end-of-lifed in 2017 when we made the decision to create Aspen Mesh.

In 2018 Aspen Mesh was born as an F5 incubation. It is the culmination of almost 20 years of solving network and security problems for some of the world's largest enterprise customers and ensuring that the form factor, consumption and pricing models are flexible and grow along with the businesses that use them. It is an acknowledgement that disruption is happening everywhere and that an organization’s agility and ability to respond to disruption is its number one business asset. Companies are realizing this agility by redefining how they deliver value to their customers as quickly as possible using technologies like cloud, containers and Kubernetes.

We know that for enterprises, agility with stability is the number one competitive advantage. Through years of experience working on enterprise products we know that companies who can meet their evolving customer needs--while staying out of the news for downtime and security breaches--will be the winners of tomorrow. Aspen Mesh’s Enterprise Service Mesh enables enterprises to rapidly deliver value to their customers in a performant, secure and compliant way.

2. Describe how your technical and learned business experience combine to build an enterprise startup and inform your understanding of how best to run a "modern" software company.

Throughout my career I have been part of waterfall to agile transformations, worked on products that enabled business agility and now run a team that requires that same flexibility and business agility. We need to be focused on getting product to market that shows value to our customers as quickly as possible. We rely on automation to ensure that we are focusing our human capital on the most important tasks. We rely on data to make our decisions and ensure that the data we have is trustworthy and secure.

The great thing is that we get to be the ones doing the disrupting, and not the ones getting disrupted. What this means is we get to move fast and don’t have the burden of a large enterprise decision-making process. We can be agile and make mistakes, and we are actually expected to make mistakes. We are told "no" more than we are told "yes." But, learning from those failures and making course corrections along the way is key to our success.

Over the years I have come to embrace the power of open source and what it can do to accelerate projects and the impacts (both positive and negative) it can have on your company. I believe that in the future all applications will be born from open technologies. Companies that acknowledge and embrace this will be the most successful in the new cloud-native and open source world. How you choose to do that depends on your business and business model. You can be a consumer of OSS in your SaaS platform, an open-core product, glue multiple projects together to create a product or provide support and services; but if you are not including open source in your modern software company, you will not be successful.

Over the past 10 years we have seen, and continue to see, consumption models across all verticals rapidly evolve from perpetual NCR-based sales models with annual maintenance contracts, to subscription- or consumption-based models, to fully managed SaaS-based offerings. I recently read an article on subscription-based banking. This is driven by the desire to shift the risk to the producer instead of the consumer. It is a realization by companies that capacity planning for 3-5 years is impossible, and that laying out that cash is a huge risk they are no longer willing to take. It is up to technology producers to provide enough value to attract customers, and then to keep providing value to retain them year over year.

Considering how you are going to offer your product in a way that scales with your customers' value matrix and growth plans is critical. This applies to pricing as well as product functionality and performance.

Finally, I would be remiss if I didn't mention data as a paramount consideration when running a modern software company. Insights derived from that data need to be at the center of everything you do. This goes not only for your product, but also for your internal visibility and decision-making processes.

On the product side, when dealing with large enterprises it is critical to understand what your customers are willing to give you and how much value they need to realize in return. An enterprise's first answer will often be “no” when you tell them you need to access their data to run your product, but that just means you haven’t shown them enough value to say "yes." You need to consider what data you need, how much you need of it, where it will be stored and how you are protecting it.

On the internal side you need to measure everything. The biggest challenge I have found with an early-stage, small team is taking the time to enable these measurements. Measurement is easy to drop off the list when you are trying to get new features out the door and you don't yet know what you're going to do with the data. Resist that urge and force your teams to think about how they can do both and, if necessary, take the time to slow down and put it in. Sometimes being thoughtful early on can help you go fast later, and adding hooks to gather and analyze data is one of those times.

Operating a successful modern software company requires you to embrace all the clichés about wearing multiple hats and failing fast. It's also critical to focus on being agile, embrace open source, create a consumption-based offering, and rely on data, data, data and more data.

3. What characteristics define a “modern” enterprise, and how does Aspen Mesh contribute to making it a reality?

The modern enterprise craves agility and considers it to be their number one business advantage. This agility is what allows the enterprise to deliver value to customers as quickly as possible. This agility is often derived from a greater reliance on technology to enable rapid speed to market. Enterprises are constantly defending against disruption from non-traditional startup companies with seemingly unlimited venture funding and no expectation of profitability. All the while the enterprise is required to compete and deliver value while maintaining the revenue and profitability goals that their shareholders have grown to expect over years of sustained growth. 

In order to remain competitive, enterprises are embracing new business models and looking for new ways to engage their customers through new digital channels. They are relying more on data and analytics to make business decisions and to make their teams and operations more efficient. Modern enterprises are embracing automation to perform mundane repetitive tasks and are turning over their workforce to gain the technical talent that allows them to compete with the smaller upstart disruptors in their space.

But agility without stability can be detrimental to an enterprise. As many recent reports have shown, enterprises can struggle with challenges around data and data security, perimeter breaches and downtime. It's easy to get caught up in the promise of the latest new technology, but moving fast and embracing new technology requires careful consideration of how it integrates into your organization, its security posture and how it scales with your business. Finding a trusted partner to accompany you on your transformation journey is key to long-term success.

Aspen Mesh is that technology partner when it comes to delivering next generation application architectures based on containers and Kubernetes. We understand the power and promise of agility and scalability that these technologies offer, but we also know that they introduce a new set of challenges for enterprises. These challenges include securing communication between services, observing and controlling service behavior and problems and managing the policy associated with services across large distributed organizations. 

Aspen Mesh provides a fully supported service mesh that is focused on enterprise use cases that include:

  • An advanced policy framework that allows users to describe business goals that are enforced in the application’s runtime environment
  • Role based policy management that enables organizations to create and apply policies according to their needs
  • A catalog of policies based on industry and security best practices that are created and tested by experts
  • Data analytics-based insights for enhanced troubleshooting and debugging
  • Predictive analytics to help teams detect and mitigate problems before they happen
  • Streamlined application deployment packages that provide a uniform approach to authentication and authorization, secure communications, and ingress and egress control
  • DevOps tools and workflow integration
  • A simplified user experience with improved organization and streamlined navigation to enable users to quickly find and mitigate failures and security issues
  • A consistent view of applications across multiple clouds to allow visibility from a global perspective to a hyper-local level
  • Graph visualizations of application relationships that enable teams to collaborate seamlessly on focused subsets of their infrastructure
  • Tabular representations surfacing details to find and remediate issues across multiple clusters running dozens or hundreds of services
  • A reduced-risk scalable consumption model that allows customers to pay as they grow

Thanks for reading! We hope that helps shed some light on what goes on behind the scenes at Aspen Mesh. And if you liked this post, feel free to subscribe to our blog in order to get updates when new articles are released.

Understanding Service Mesh

The Origin of Service Mesh

In the beginning, we had packets and packet-switched networks.

Everyone on the Internet — all 30 of them — used packets to build addressing and session establishment/teardown. Then they'd need a retransmission scheme. Then they'd build an ordered byte stream out of it.

Eventually, they realized they had all built the same thing. The RFCs for IP and TCP standardized this, operating systems provided a TCP/IP stack, so no application ever had to turn a best-effort packet network into a reliable byte stream.

We took our reliable byte streams and used them to make applications. Turns out that a lot of those applications had common patterns again — they requested things from servers, and then got responses. So, we separated these request/responses into metadata (headers) and body.

HTTP standardized the most widely deployed request/response protocol. Same story. App developers don't have to implement the mechanics of requests and responses. They can focus on the app on top.

There's a newer set of functionality that you need to build a reliable microservices application: service discovery, versioning, zero trust... all the stuff popularized by the Netflix architecture, by 12-factor apps, etc. We see the same thing happening again: an emerging set of best practices that you have to build into each microservice to be successful.

So, service mesh is about putting all that functionality again into a layer, just like HTTP, TCP, packets, that's underneath your code, but creating a network for services rather than bytes.

Questions? Download The Complete Guide to Service Mesh or keep reading to find out more about what exactly a service mesh is.

What Is A Service Mesh?

A service mesh is a transparent infrastructure layer that sits between your network and application.

It’s designed to handle a high volume of service-to-service communications using application programming interfaces (APIs). A service mesh ensures that communication among containerized application services is fast, reliable and secure.

The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and the ability to control policy and configuration in your Kubernetes clusters.

Service mesh helps address many of the challenges that arise when your application is being consumed by the end user. Being able to monitor what services are communicating with each other, if those communications are secure, and being able to control the service-to-service communication in your clusters are key to ensuring applications are running securely and resiliently.

More Efficiently Managing Microservices

The self-contained, ephemeral nature of microservices comes with some serious upside, but keeping track of every single one is a challenge — especially when trying to figure out how the rest are affected when a single microservice goes down. The end result is that if you’re operating or developing in a microservices architecture, there’s a good chance part of your days are spent wondering what the hell your services are up to.

With the adoption of microservices, problems also emerge due to the sheer number of services that exist in large systems. Problems like security, load balancing, monitoring and rate limiting that had to be solved once for a monolith, now have to be handled separately for each service.

Service mesh helps address many of these challenges so engineering teams, and businesses, can deliver applications more quickly and securely.

Why You Might Care

If you’re reading this, you’re probably responsible for making sure that you and your end users get the most out of your applications and services. In order to do that, you need to have the right kind of access, security and support. That’s probably why you started down the microservices path.

If that’s true, then you’ve probably realized that microservices come with their own unique challenges, such as:

  1. Increased surface area that can be attacked
  2. Polyglot challenges
  3. Controlling access for distributed teams developing on a single application

That’s where a service mesh comes in.

A service mesh is an infrastructure layer for microservices applications that can help reduce the complexity of managing microservices and deployments by handling infrastructure service communication quickly, securely and reliably. 

Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

Here’s the point: a good service mesh keeps your company’s services running the way they should. A service mesh designed for the enterprise, like Aspen Mesh, gives you all the observability, security and traffic management you need — plus access to engineering and support, so you can focus on adding the most value to your business.

And that is good news for DevOps.

The Rise of DevOps - and How Service Mesh Is Enabling It

It’s happening, and it’s happening fast.

Companies are transforming internal orgs and product architectures along a new axis of performance. They’re finding more value in iterations, efficiency and incremental scaling, forcing them to adopt DevOps methodologies. This focus on time-to-market is driving some of the most cutting-edge infrastructure technology that we have ever seen. Technologies like containers and Kubernetes, and a focus on stable, consistent and open APIs allow small teams to make amazing progress and move at the speeds they require. These technologies have reduced the friction and time to market.

The adoption of these technologies isn’t perfect, and as companies deploy them at scale, they realize that they have inadvertently increased complexity and de-centralized ownership and control. In many cases, it’s challenging to understand the entire system.

A service mesh enables DevOps teams by helping manage this complexity. It provides autonomy and freedom for development teams through a stable and scalable platform, while simultaneously providing a way for platform teams to enforce security, policy and compliance standards.

This empowers your development teams to make choices based on the problems they are solving rather than being concerned with the underlying infrastructure. Dev teams now have the freedom to deploy code without the fear of violating compliance or regulatory guidelines, and platform teams can put guardrails in place to ensure your applications are secure and resilient.

Want to learn more? Get the Complete Guide to Service Mesh here.

From NASA to Service Mesh

The New Stack recently published a podcast featuring our CTO, Andrew Jenkins, discussing How Service Meshes Found a Former Space Dust Researcher. In the podcast, Andrew talks about how he moved from working on electrical engineering and communication protocols for NASA to software and finally service mesh development here at Aspen Mesh.

“My background is in electrical engineering, and I used to work a lot more on the hardware side of it, but I did get involved in communication, almost from the physical layer, and I worked on some NASA projects and things like that,” said Jenkins. “But then my career got further and further up into the software side of things, and I ended up at a company called F5 Networks. [Eventually] this ‘cloud thing’ came along, and F5 started seeing a lot of applications moving to the cloud. F5 offers their product in a version that you use in AWS, so what I was working on was an open source project to make a Kubernetes ingress controller for the F5 device. That was successful, but what we saw was that a lot of the traffic was shifting to the inside of the Kubernetes cluster. It was service-to-service communication from all these tiny things--these microservices--that were designed to be doing business logic. So this elevated the importance of communication...and that communication became very important for all of those tiny microservices to work together to deliver the final application experience for developers. So we started looking at that microservice communication inside and figuring out ways to make that more resilient, more secure and more observable so you can understand what’s going on between your applications.”

In addition, the podcast covers the evolution of service mesh, more details about tracing and logging, canaries, Kubernetes, YAML files and other surrounding technologies that extend service mesh to help simplify microservices management.

“I hope service meshes become the [default] way to deal with distributed tracing or certificate rotation. So, if you have an application, and you want it to be secure, you have to deal with all these certs, keys, etc.,” Jenkins said. “It’s not impossible, but when you have microservices, you do not have to do it a whole lot more times. So that’s why you get this better bang for the buck by pushing that down into that service mesh layer where you don’t have to repeat it all the time.”

To listen to the entire podcast, visit The New Stack’s post.

Interested in reading more articles like this? Subscribe to the Aspen Mesh blog:

The Complete Guide to Service Mesh

What’s Going On In The Service Mesh Universe?

Service meshes are relatively new, extremely powerful and can be complex. There’s a lot of information out there on what a service mesh is and what it can do, but it’s a lot to sort through. Sometimes, it’s helpful to have a guide. If you’ve been asking questions like “What is a service mesh?” “Why would I use one?” “What benefits can it provide?” or “How did people even come up with the idea for service mesh?” then The Complete Guide to Service Mesh is for you.

Check out the free guide to find out:

  • The service mesh origin story
  • What a service mesh is
  • Why developers and operators love service mesh
  • How a service mesh enables DevOps
  • Problems a service mesh solves

The Landscape Right Now

A service mesh overlaps with, complements, and in some cases replaces many tools that are commonly used to manage microservices. Last year was all about evaluating and trying out service meshes. But while curiosity about service mesh is still at a peak, enterprises are already in the evaluation and adoption process.

The capabilities service mesh can add to ease managing microservices applications at runtime are clearly exciting to early adopters and companies evaluating service mesh. Conversations tell us that many enterprises are already using microservices and service mesh, and many others are planning to deploy in the next six months. And if you’re not yet sure about whether or not you need a service mesh, check out the recent Gartner, 451 and IDC reports on microservices — all of which say a service mesh will be mandatory by 2020 for any organization running microservices in production.

Get Started with Service Mesh

Are you already using Kubernetes and Istio? You might be ready to get started using a service mesh. Download Aspen Mesh here or contact us to talk with a service mesh expert about getting set up for success.

Get the Guide

Fill out the form below to get your copy of The Complete Guide to Service Mesh.

Expanding Service Mesh Without Envoy

Istio uses the Envoy sidecar proxy to handle traffic within the service mesh.  The following article describes how to use an external proxy, F5 BIG-IP, to integrate with an Istio service mesh without having to use Envoy for the external proxy.  This can provide a method to extend the service mesh to services where it is not possible to deploy an Envoy proxy.

This method could be used to secure a legacy database to only allow authorized connections from a legacy app that is running in Istio, but not allow any other applications to connect.

Securing Legacy Protocols

A common problem that customers face when deploying a service mesh is how to restrict access to an external service to a limited set of services in the mesh.  When all services can run on any node it is not possible to restrict access by IP address (a “good container” comes from the same IP as a “malicious container”).

One method of securing the connection is to isolate an egress gateway to a dedicated node and restrict traffic to the database from those nodes.  This is described in Istio’s documentation:

Istio cannot securely enforce that all egress traffic actually flows through the egress gateways. Istio only enables such flow through its sidecar proxies. If attackers bypass the sidecar proxy, they could directly access external services without traversing the egress gateway. Thus, the attackers escape Istio’s control and monitoring. The cluster administrator or the cloud provider must ensure that no traffic leaves the mesh bypassing the egress gateway.

   -- Istio documentation (2019-03-25)
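For reference, steering an external TCP service through a dedicated egress gateway involves Istio configuration roughly along these lines. This is a minimal sketch only: the hostname, port and resource names are illustrative, and a real deployment also needs routing rules and the node isolation described above.

```yaml
# Sketch: register an external TCP database and expose it on the egress gateway.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-db
spec:
  hosts:
  - db.legacy.example.com   # illustrative external hostname
  ports:
  - number: 5432
    name: tcp-db
    protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: egress-db
spec:
  selector:
    istio: egressgateway    # the dedicated egress gateway workload
  servers:
  - port:
      number: 5432
      name: tcp
      protocol: TCP
    hosts:
    - db.legacy.example.com
```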

Another method would be to use mesh expansion to install Envoy onto the VM that is hosting your database. In this scenario the Envoy proxy on the database server would validate requests prior to forwarding them to the database.

The third method, which we will cover here, is to deploy a BIG-IP to act as an egress device that is external to the service mesh.  This is a hybrid of mesh expansion and multicluster mesh.

Mesh Expansion Without Envoy

Under the covers Envoy is using mutual TLS to secure communication between proxies.  To participate in the mesh, the proxy must use certificates that are trusted by Istio; this is how VM mesh expansion and multicluster service mesh are configured with Envoy.  To use an alternate proxy we need to have the ability to use certificates that are trusted by Istio.
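The trust requirement can be illustrated with a small sketch: a server-side TLS context that refuses any client that does not present a certificate signed by the mesh's CA. The helper below is hypothetical, and assumes the Istio root CA has been exported as a PEM string; it is not part of any Istio or BIG-IP API.

```python
# Hypothetical sketch of the mutual-TLS posture an external proxy needs:
# require a client certificate, and trust only the mesh's CA to have issued it.
import ssl

def make_mtls_server_context(ca_pem=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Reject any client that does not present a certificate at all.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_pem is not None:
        # Trust only the mesh CA, not the system trust store.
        ctx.load_verify_locations(cadata=ca_pem)
    return ctx
```

The same two decisions (require a client cert, pin the trust anchor to Istio's CA) are what the BIG-IP configuration below expresses in its own terms.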

Example of Extending Without Envoy

The following example serves as a proof of concept for extending the mesh.  We will create a TCP-based “echo” service that lives outside of the service mesh.  The goal is to restrict access so that only authorized “good containers” can connect to the “echo” service via the BIG-IP.  The steps involved:

  1. Retrieve/Create certificates trusted by Istio
  2. Configure external proxy (BIG-IP) to use trusted certificates and only trust Istio certificates
  3. Add policy to external proxy to only allow “good containers” to connect
  4. Register BIG-IP device as a member of the Istio service mesh
  5. Verify that “good container” can connect to “echo” and “bad container” cannot

First we install a set of certificates on the BIG-IP that Envoy will trust and configure the BIG-IP to only allow connections from Istio.  The certs could either be pulled directly from Kubernetes (similar to setting up mesh expansion) or generated by a common CA that is trusted by Istio (similar to multicluster service mesh).

Once the certs are retrieved/generated we install them onto the proxy, BIG-IP, and configure the device to only trust client side certificates that are generated by Istio.

To enable a policy that validates the identity of the “good container”, we inspect the X509 Subject Alternative Name fields of the client certificate for the SPIFFE name that contains the identity of the container.
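The decision the proxy makes from that SPIFFE name can be sketched in a few lines of Python. The helper names and the allow-list are illustrative, not part of the actual iRule; the sketch assumes SPIFFE IDs of the form spiffe://&lt;trust-domain&gt;/ns/&lt;namespace&gt;/sa/&lt;service-account&gt;.

```python
# Sketch of the policy decision applied to the client certificate's SPIFFE ID.
from urllib.parse import urlparse

ALLOWED_NAMESPACES = {"trusted"}  # illustrative allow-list

def namespace_of(spiffe_id):
    """Extract the namespace segment from a SPIFFE URI SAN."""
    parts = urlparse(spiffe_id).path.strip("/").split("/")
    # The path looks like ns/<namespace>/sa/<service-account>.
    if len(parts) >= 2 and parts[0] == "ns":
        return parts[1]
    raise ValueError("unexpected SPIFFE ID: %s" % spiffe_id)

def allow(spiffe_id):
    return namespace_of(spiffe_id) in ALLOWED_NAMESPACES

print(allow("spiffe://cluster.local/ns/trusted/sa/sleep"))    # True
print(allow("spiffe://cluster.local/ns/default/sa/default"))  # False
```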

Once the external proxy is configured we can register the device using “istioctl register” (similar to mesh expansion).

To verify that our test scenario is working we will have two namespaces, “default” and “trusted”.  Connections from “trusted” will be allowed and connections from “default” will be rejected.  From each namespace we create a pod and run the command “nc bigip.default.svc.cluster.local 9000”.  Looking at our BIG-IP logs we can verify that our policy (iRule) worked:

Mar 25 18:56:39 ip-10-1-1-7 info tmm5[17954]: Rule /Common/log_cert <CLIENTSSL_CLIENTCERT>: allowing: spiffe://cluster.local/ns/trusted/sa/sleep
Mar 25 18:57:00 ip-10-1-1-7 info tmm2[17954]: Rule /Common/log_cert <CLIENTSSL_CLIENTCERT>: rejecting spiffe://cluster.local/ns/default/sa/default

Connection from our “good container”

/ # nc bigip.default.svc.cluster.local 9000

Connection from our “bad container”

# nc bigip.default.svc.cluster.local 9000

In the case of the “bad container” we are unable to connect.  The “nc” (netcat) command is simulating a very basic TCP client.  A more realistic example would be connecting to an external database that contains sensitive data.  In the “good” example we are echoing back the capitalized input (“hi” becomes “HI”).

Just One Example

In this article we looked at expanding a service mesh without Envoy.  This was focused on egress TCP traffic, but it could be expanded to:

  • Using BIG-IP as an SNI proxy instead of NGINX
  • Securing inbound traffic using mTLS and/or JWT tokens
  • Using BIG-IP as an ingress gateway
  • Using ServiceEntry/DestinationRules instead of registered service

If you want to see the process in action, check out this short video walkthrough.

Let me know in the comments whether you’re interested in any of these use cases or come up with your own.  Thank you!

Why Service Meshes, Orchestrators Are Do or Die for Cloud Native Deployments

The self-contained, ephemeral nature of microservices comes with some serious upside, but keeping track of every single one is a challenge, especially when trying to figure out how the rest are affected when a single microservice goes down. The end result is that if you’re operating or developing in a microservices architecture, there’s a good chance part of your days are spent wondering what the hell your services are up to.

With the adoption of microservices, problems also emerge due to the sheer number of services that exist in large systems. Problems like security, load balancing, monitoring and rate limiting that had to be solved once for a monolith, now have to be handled separately for each service.

The good news is that engineers love a good challenge. And almost as quickly as they are creating new problems with microservices, they are addressing those problems with emerging microservices tools and technology patterns. Maybe the emergence of microservices is just a smart play by engineers to ensure job security.

Today’s cloud native darling, Kubernetes, eases many of the challenges that come with microservices. Auto-scheduling, horizontal scaling and service discovery solve the majority of build-and-deploy problems you’ll encounter with microservices.

What Kubernetes leaves unsolved are a few key runtime issues for containerized applications. That’s where a service mesh steps in. Let’s take a look at what Kubernetes provides, and how Istio adds to Kubernetes to solve these microservices runtime issues.

Kubernetes Solves Build-and-Deploy Challenges


Kubernetes supports a microservice architecture by enabling developers to abstract away the functionality of a set of pods, and expose services to other developers through a well-defined API. Kubernetes enables L4 load balancing, but it doesn’t help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting and circuit breaking.
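As a concrete illustration, the kind of L7 traffic splitting Kubernetes alone can't express is just a weighted route in an Istio VirtualService. The sketch below is illustrative only: the “reviews” host and the v1/v2 subsets are assumed to be defined elsewhere (e.g. in a matching DestinationRule).

```yaml
# Sketch: send 90% of requests to v1 and 10% to a v2 canary.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```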

Service Mesh Addresses Challenges of Managing Traffic at Runtime

Service mesh helps address many of the challenges that arise when your application is being consumed by the end user. Being able to monitor what services are communicating with each other, if those communications are secure and being able to control the service-to-service communication in your clusters are key to ensuring applications are running securely and resiliently.

Istio also provides a consistent view across a microservices architecture by generating uniform metrics throughout. It removes the need to reconcile different types of metrics emitted by various runtime agents, or add arbitrary agents to gather metrics for legacy un-instrumented apps. It adds a level of observability across your polyglot services and clusters that is unachievable at such a fine-grained level with any other tool.

Istio also adds a much deeper level of security. While Kubernetes only provides basic secret distribution and control-plane certificate management, Istio provides mTLS capabilities so you can encrypt on-the-wire traffic to ensure your service-to-service communications are secure.
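With Istio releases of this era (pre-1.5 authentication API), mesh-wide mTLS could be switched on with a policy resource along these lines. Treat this as a sketch rather than a drop-in config; destination-side TLS settings are configured separately.

```yaml
# Sketch: require mTLS for all services in the mesh (pre-1.5 Istio API).
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
```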

A Match Made in Heaven

Pairing Kubernetes with a service mesh like Istio gives you the best of both worlds, and since Istio was made to run on Kubernetes, the two work together seamlessly. You can use Kubernetes to manage all of your build and deploy needs while Istio takes care of the important runtime issues.

Kubernetes has matured to a point that most enterprises are using it for container orchestration. Currently, there are 74 CNCF-certified service providers — which is a testament to the fact that there is a large and growing market. I see Istio as an extension of Kubernetes and a next step to solving more challenges in what feels like a single package.

Already, Istio is quickly maturing and is starting to see more adoption in the enterprise. It’s likely that in 2019 we will see Istio emerge as the service mesh standard for enterprises in much the same way Kubernetes has emerged as the standard for container orchestration.