Get App-focused Security from an Enterprise-class Service Mesh | On-demand Webinar

In our webinar, You've Got Kubernetes. Now You Need App-focused Security Using Istio, now available to view on demand, we teamed up with Mirantis, an industry leader in enterprise-ready Kubernetes deployment and management, to talk about security, Kubernetes, service mesh, Istio and more. If you have Kubernetes, you're off to a great start with a great platform for security based on microsegmentation and Network Policy. But firewalls and perimeters aren't enough -- even in their modern, in-cluster form.  

As enterprises embark on the cloud journey, modernizing applications with microservices and containers running on Kubernetes is key to application portability, code reuse and automation. But along with these advantages come significant security and operational challenges, driven by security threats at various layers of the stack. While Kubernetes platform providers like Mirantis manage security at the infrastructure, orchestration and container levels, security at the application services level remains a concern. This is where a service mesh comes in. 

Companies with a hyper focus on security – like those in healthcare, finance, government, and other highly regulated industries – demand the highest level of security possible to thwart cyberthreats, data breaches and non-compliance issues. You can up-level your security by adding a service mesh that's able to secure thousands of connections between microservice containers, inside a single cluster or across the globe. Today Istio is the gold standard for an enterprise-class service mesh for building Zero Trust Security. But I'm not the first to say that implementing open source Istio has its challenges -- and can cause a lot of headaches when Istio deployment and management is added to a DevOps team's workload without some forethought.  

Aspen Mesh delivers an Istio-based, security-hardened, enterprise-class service mesh that's easy to manage. Our Istio solution reduces friction between the experts in your organization because it understands your apps -- and it seamlessly integrates into your SecOps approach and certificate authority architecture. 

It’s not just about what knobs and config you adjust to get mTLS in one cluster – in our webinar we covered the architectural implications and lessons learned that’ll help you fit service mesh into your up-leveled Kubernetes security journey. It was a lively discussion with a lot of questions from attendees. Click the link below to watch the live webinar recording.

-Andrew

 

Click to watch webinar now:

On Demand Webinar | You've Got Kubernetes. Now You Need App-focused Security Using Istio.

 The webinar gets technical as we delve into: 

  • How Istio controls North-South and East-West traffic, and how it relates to application-level traffic. 
  • How Istio secures communication between microservices. 
  • How to simplify operations and prevent security holes as the number of microservices in production grows. 
  • What is involved in hardening Istio into an enterprise-class service mesh. 
  • How mTLS provides a zero-trust-based approach to security. 
  • How Aspen Mesh uses crypto to give each container its own identity (using a framework called SPIFFE); a brief illustration follows this list. When containers talk to each other through the service mesh, they prove who they are cryptographically. 
  • Secure ingress and egress, and Cloud Native packet capture. 
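
On the SPIFFE identity point above, here's a brief illustration (a sketch, assuming Istio's default trust domain of cluster.local): each workload certificate encodes a SPIFFE ID derived from the pod's Kubernetes namespace and service account, for example:

    spiffe://cluster.local/ns/<namespace>/sa/<service-account>

    # e.g., a pod running as the "payments" service account in the "prod" namespace
    # presents this identity when its sidecar negotiates mutual TLS:
    spiffe://cluster.local/ns/prod/sa/payments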

Aspen Mesh Leads the Way for a Secure Open Source Istio

Here at Aspen Mesh, we entrenched ourselves in the Istio project not long after its start. Recognizing Istio's potential early on, we committed to building our entire company with Istio at its core. From the early days of the project, Aspen Mesh took an active role in Istio -- we've been part of the community since fall of 2017. Among our many firsts, Aspen Mesh was the first non-founding company to have someone on the Technical Oversight Committee (TOC) and to hold a release manager role, when we helped manage the release of Istio v1.6 in 2020.

Ensuring open source Istio continues to set the standard as the foundation for a secure enterprise-class service mesh is important to us. I hold a seat on the Istio Product Security Working Group (PSWG), where we continuously monitor and address potential Common Vulnerabilities and Exposures (CVE) reports for Istio and its dependencies, like the Envoy project. In fact, we helped create the PSWG in collaboration with other community leaders to ensure Istio remains a secure project with well-defined practices around responsible early disclosure and incident management.

Along with me, my colleague Jacob Delgado has been a tremendous contributor to Istio's security, and he currently leads the Product Security Working Group.

Aspen Mesh Leads Contribution to Open Source Istio

The efforts of Aspen 'Meshers' can be seen across Istio's architecture today, and we add features to open source Istio regularly. Some of the major features we've added include Elliptic Curve Cryptography (ECC) support, configuration validation (istio-vet -> Istio analyzers), custom tracing tags, and Helm v3 support. We are a top-five Istio contributor of pull requests (PRs). One of our primary areas of focus is helping to shape and harden Istio's security. We have responsibly reported several critical CVEs and addressed them as part of PSWG, like the Authentication Policy Bypass CVE. You can read more about how security releases and 0-day critical CVE patches are handled in Istio in this blog authored by my colleague Jacob.

Istio Security Assessment Report findings announced in 2021

The success of the Istio project, and its critical role enforcing key security policies in infrastructure across a wide swath of industries, was the impetus for a comprehensive security assessment that began in 2020. To determine whether there were any security issues in the Istio code base, a third-party security assessment of the Istio project was conducted last year, enlisting the NCC Group and subject matter experts from across the community.

This in-depth assessment focused on Istio's architecture as a whole, looking at security-related issues with a focus on key components like istiod (Pilot), the ingress/egress gateways, and Istio's overall usage of Envoy as its data plane proxy, for Istio version 1.6.5. Since the report, the Product Security Working Group has issued several security releases as new vulnerabilities were disclosed, along with fixes to address concerns raised in the report. A good outcome of the report is the detailed Security Best Practices Guide developed for Istio users.

I invite you to read a summary of the Istio Security Assessment Report I compiled for the Istio community. I detail the key areas of the report and distill what it means for Istio users today and looking ahead. Whether you're a current open source Istio user, like keeping up on all things security, or you want a deep dive into Istio Security, you can get the highlights from the blog I posted for the Istio community -- and from there you can also download the entire report.

At Aspen Mesh, we build upon the security features Istio provides and address enterprise security requirements with a zero-trust based service mesh that provides security within the Kubernetes cluster, provides monitoring and alerts, and ensures highly-regulated industries maintain compliance. You can read about how we think about security in our white paper, Adopting a Zero-Trust Approach to Security for Containerized Applications.

If you'd like to talk to us about what enterprise security in a service mesh looks like, please get in touch!

-Neeraj

 


Sailing Faster with Istio

While the extraordinarily large container ship Ever Given ran aground in the Suez Canal, halting a major trade route and causing losses in the billions, our solution engineers at Aspen Mesh have been stuck diagnosing a tricky Istio and Envoy performance bottleneck on their own island for the past few weeks. Though the scale and global impacts of these two problems are quite different, it presented an interesting way to correlate a global shipping event with the metaphorical nautical themes used by Istio. To elaborate on this theme, let's switch from containers carrying dairy, and apparently everything else under the sun, to containers shuttling network packets.

To unlock the most from containers and a microservices architecture, Istio (and Aspen Mesh) uses a sidecar proxy model. Adding sidecar proxies into your mesh provides a host of benefits, from uniform identity to security to metrics and advanced traffic routing. As Aspen Mesh customers range from large enterprises all the way to service providers, the performance impacts of adding these sidecars are as important to us as the benefits outlined above. The performance experiment that I'm going to cover in this blog is geared toward evaluating the impact of adding sidecar proxies in high throughput scenarios, on the server side, the client side, or both.

We have encountered workloads, especially in the service provider space, where there are high requests or transactions-per-second requirements for a particular service. Also, scaling up — i.e., adding more CPU/memory — is preferable to scaling out. We wanted to test the limits of sidecar proxies with regards to the maximum achievable throughput so that we can tune and optimize our model to meet the performance requirements of the wide variety of workloads used by our customers.

Throughput Test Setup

The test setup we used for this experiment was rather simple: a Fortio client and server running on Kubernetes on large AWS node instance types like burstable t3.2xlarge with 8 vCPUs and 32 GB of memory or dedicated m5.8xlarge instance types which have 32 vCPUs and 128 GB of memory. The test was running a single instance of the Fortio client and server pod with no resource constraints on their own dedicated nodes. The Fortio client was run in a mode to maximize throughput like this:
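
The original command isn't reproduced here, but a minimal sketch of the kind of Fortio invocation we mean (the target URL and connection count are placeholders) looks like this:

    # 60-second run, QPS 0 (unlimited), -c sets the number of parallel connections
    fortio load -qps 0 -t 60s -c 64 http://fortio-server:8080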

The above command runs the test for 60 seconds with queries per second (QPS) 0 (i.e. maximum throughput with a varying number of simultaneous parallel connections). With this setup on a t3.2xlarge machine, we were able to achieve around 100,000 QPS. Further increasing the number of parallel connections didn’t result in throughput beyond ~100K QPS, signaling a possible CPU bottleneck. Running the same experiment on an m5.8xlarge instance, we could achieve much higher throughput around 300,000 QPS or higher depending upon the parallel connection settings.

This was sufficient proof of CPU throttling. As adding more CPUs increased the QPS, we felt that we had a reasonable baseline to start evaluating the effects of adding sidecar proxies in this setup.

Adding Sidecar Proxies on Both Ends

Next, with the same setup on t3.2xlarge instances, we added Istio sidecar proxies on both the Fortio client and server pods with Aspen Mesh default settings: mTLS set to STRICT, access logging enabled, and the default concurrency (worker threads) of two. With these parameters, and running the same command as before, we could only get a maximum throughput of around ~10,000 QPS.

This is a factor of 10 reduction in throughput. This was expected as we had only configured two worker threads, which were hopefully running at their maximum capacity but could not keep up with client load.

So, the logical next step for us was to increase the concurrency setting to run more worker threads to accept more connections and achieve higher throughput. In Istio and Aspen Mesh, you can set the proxy concurrency globally via the concurrency setting in proxy config under mesh config or override them via pod annotations like this:
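
As a sketch, the per-pod override can look roughly like this (the concurrency value shown is illustrative):

    metadata:
      annotations:
        proxy.istio.io/config: |
          concurrency: 4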

Note that using the value “0” for concurrency configures it to use all the available cores on the machine. We increased the concurrency setting from two to four to six and saw a steady increase in maximum throughput from 10K QPS to ~15K QPS to ~20K QPS as expected. However, these numbers were still quite low (by a factor of five) as compared to the results with no sidecar proxies.

To eliminate the CPU throttling factor, we ran the same experiment on m5.8xlarge instances with even higher concurrency settings but the maximum throughput we could achieve was still around ~20,000 QPS.

This degradation was far from acceptable, so we dug into why the throughput was low even with sufficient worker threads configured on the sidecar proxies.

Peeling the Onion

To investigate this issue, we looked at the CPU utilization metrics in the server pod and noticed that the CPU utilization as a percentage of total requested CPUs was not very high. This seemed odd as we expected the proxy worker threads to be spinning as fast as possible to achieve the maximum throughput, so we needed to investigate further to understand the root cause.

To get a better understanding of low CPU utilization, we inspected the connections received by the server sidecar proxy. Envoy’s concurrency model relies on the kernel to distribute connections between the different worker threads listening on the same socket. This means that if the number of connections received at the server sidecar proxy is less than the number of worker threads, you can never fully use all CPUs.

As this investigation was purely on the server-side, we ran the above experiment again with the Fortio client pod, but this time without the sidecar proxy injected and only the Fortio server pod with the proxy injected. We found that the maximum throughput was still limited to around ~20K QPS as before, thereby hinting at issues on the server sidecar proxy.

To investigate further, we had to look at the connection-level metrics reported by the Envoy proxy. By default, Istio and Aspen Mesh don't expose these connection-level metrics from Envoy, so later in this article we'll see what happens to this experiment with the Envoy metrics exposed.

These metrics can be enabled in Istio version 1.8 and above by following this guide and adding the appropriate pod annotations corresponding to the metrics you want to be exposed. Envoy has many low-level metrics emitted at high resolution that can easily overwhelm your metrics backend for a moderately sized cluster, so you should enable this cautiously in production environments.

Additionally, it can be quite a journey to find the right Envoy metrics to enable, so here’s what you will need to get connection-level metrics. On the server-side pod, add the following annotation:
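
Here's a sketch of the kind of annotation we mean, using the proxyStatsMatcher mechanism from that guide (the exact regexes are illustrative):

    metadata:
      annotations:
        proxy.istio.io/config: |
          proxyStatsMatcher:
            inclusionRegexps:
            - listener\..*\.downstream_cx_total
            - listener\..*\.downstream_cx_active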

This will enable reporting for all listeners configured by Istio, which can be a lot depending upon the number of services in your cluster, but only enable the downstream connections total counter and downstream connections active gauge metrics.

To look at these metrics, you can use your Prometheus dashboard, if it’s enabled, or port-forward to the server pod under test to port 15000 and navigate to http://localhost:15000/stats/prometheus. As there are many listeners configured by Istio, it can be tricky to find the correct one. Here’s a quick primer on how Istio sets up Envoy configuration. (You can find the complete list of Envoy listener metrics here.)
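
For example, something along these lines will dump the proxy's stats locally (pod name and namespace are placeholders):

    kubectl -n default port-forward <fortio-server-pod> 15000:15000 &
    curl -s http://localhost:15000/stats/prometheus | grep downstream_cx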

For any inbound connections to a pod from clients outside of the pod, Istio configures a virtual inbound listener at 0.0.0.0:15006, which receives all the traffic from iptables’ redirect rules. This is the only listener that’s actually configured to receive connections from the kernel, and after the connection is received, it is matched against filter chain attributes to proxy the traffic to the correct application port on localhost. This means that even though the Fortio client above is targeting port 8080, we need to look at the total and active connections for the virtual inbound listener at 0.0.0.0:15006 instead of 0.0.0.0:8080. Looking at this metric, we found that the number of active connections were close to the configured number of simultaneous connections on the Fortio client side. This invalidated our theory about the number of connections being less than worker threads.

The next step in our debugging journey was to look at the number of connections received on each worker thread. As I had alluded to earlier, Envoy relies on the kernel to distribute the accepted connections to different worker threads, and for all the worker threads to be fully utilizing the allotted CPUs, the connections also need to be fairly balanced. Luckily, Envoy has per-worker metrics for listeners that can be enabled to understand the distribution. Since these metrics are rooted at listener.<address>.<handler>.<metric name>, the regex provided in the annotation above should also expose these metrics. The per-worker metrics looked like this:
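
The original screenshot isn't reproduced here, but these per-worker counters follow the listener.<address>.worker_<id>.<metric> naming, so the raw stats we were scanning looked roughly like this (values are illustrative, matching the imbalance described next):

    listener.0.0.0.0_15006.worker_0.downstream_cx_active: 1350
    listener.0.0.0.0_15006.worker_1.downstream_cx_active: 1102
    listener.0.0.0.0_15006.worker_2.downstream_cx_active: 764
    ...
    listener.0.0.0.0_15006.worker_10.downstream_cx_active: 11489
    ...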

As you can see from the metrics above, the connections were far from being evenly distributed among the worker threads. One thread, worker 10, had 11.5K active connections as compared to some threads which had around ~1-1.5K active connections, and others were even lower. This explains the low CPU utilization numbers, as most of the worker threads just didn't have enough connections to do useful work.

In our Envoy research, we quickly stumbled upon this issue, which very nicely sums up the problem and the various efforts that have been made to fix it.


So, next, we went looking for a solution to fix this problem. It seemed like, for the moment, our own Ever Given was stuck as some diligent worker threads struggled to find balance. We needed an excavator to start digging.

While our intrepid team tackled the problem of scaling for high-throughput workloads by adding sidecar proxies, we encountered a bottleneck not entirely unlike what the Ever Given experienced not long ago in the Suez Canal.

Luckily, we had a few more things to try, and we were ready to take a closer look at the listener metrics.

Let There Be Equality Among Threads!

After parsing through the conversations in the issue, we found the pull request that enabled a configuration option to turn on a feature to achieve better balancing across worker threads. At this point, trying this out seemed worthwhile, so we looked at how to enable this in Istio. (Note that as part of this PR, the per-worker thread metrics were added, which was useful in diagnosing this problem.)

For all the ignoble things EnvoyFilter can do in Istio, it’s useful in situations like these to quickly try out new Envoy configuration knobs without making code changes in “istiod” or the control plane. To turn the “exact balance” feature on, we created an EnvoyFilter resource like this:
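
Here's a sketch of that EnvoyFilter (the resource name, namespace and workload label are illustrative; the essential part is merging a connection_balance_config of exact_balance into the virtual inbound listener):

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: fortio-server-exact-balance    # illustrative name
      namespace: default                   # assumes the workload runs here
    spec:
      workloadSelector:
        labels:
          app: fortio-server               # assumed workload label
      configPatches:
      - applyTo: LISTENER
        match:
          context: SIDECAR_INBOUND
          listener:
            portNumber: 15006              # the virtual inbound listener
        patch:
          operation: MERGE
          value:
            connection_balance_config:
              exact_balance: {}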

With this configuration applied and with bated breath, we ran the experiment again and looked at the per-worker thread metrics. Voila! This time the connections were nearly perfectly balanced across all the worker threads.

Measuring the throughput with this configuration set, we could achieve around ~80,000 QPS, which is a significant improvement over the earlier results. Looking at CPU utilization, we saw that all the CPUs were fully pegged at or near 100%. This meant that we were finally seeing the CPU throttling. At this point, by adding more CPUs and a bigger machine, we could achieve much higher numbers as expected. So far so good.

As you may recall, this experiment was purely to test the effects of server sidecar proxy, so we removed the client sidecar proxy for these tests. It was now time to measure performance with both sidecars added.

Measuring the Impacts of a Client Sidecar Proxy

With this exact balancing configuration enabled on the inbound port (server side only), we ran the experiment with sidecars on both ends. We were hoping to achieve high throughput that would only be limited by the number of CPUs dedicated to Envoy worker threads. If only things were that simple.

We found that the maximum throughput was once again capped at around ~20K QPS.

A bit disappointing, but since we then knew about the issue of connection imbalance on the server side, we reasoned that the same could happen on the client side between the application and the sidecar proxy container on localhost. First, we enabled the following metrics on the client-side proxy:
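
As a sketch, we used the same proxyStatsMatcher mechanism as on the server side and added the cluster-level connection counters (regexes are illustrative):

    metadata:
      annotations:
        proxy.istio.io/config: |
          proxyStatsMatcher:
            inclusionRegexps:
            - listener\..*\.downstream_cx_total
            - listener\..*\.downstream_cx_active
            - cluster\..*\.upstream_cx_total
            - cluster\..*\.upstream_cx_active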

In addition to the listener metrics, we also enabled cluster-level metrics, which emit total and active connections for any upstream cluster. We wanted to verify that the client sidecar proxy was sending a sufficient number of connections to the upstream Fortio server cluster to keep the server worker threads occupied. We found that the number of active connections mirrored the number of connections used by the Fortio client in our command. This was a good sign. Note that Envoy doesn’t report cluster-level metrics at the per-worker level, but these are all aggregated, so there’s no way for us to know how the connections were distributed on the outbound side.

Next, we inspected the listener connection statistics on the client side, similar to the server side, to ensure that we were not having connection imbalance issues. The outbound listeners, or the listeners set up to handle traffic originating from the application in the same pod as the sidecar proxy, are set up a bit differently in Istio than the inbound side. For outbound traffic, a virtual listener at "0.0.0.0:15001" is created, similar to the listener at "0.0.0.0:15006," which is the target for iptables redirect rules. Unlike the inbound side, the virtual listener hands off the connection to a more specific listener like "0.0.0.0:8080" based on the original destination address. If there are no specific matches, then the listener configuration in the virtual outbound takes effect, which can block or allow all traffic depending on your configured outbound traffic policy. In the traffic flow from the Fortio client to the server, we expected the listener at "0.0.0.0:8080" to be handling connections on the client-side proxy, so we inspected the connection metrics at that listener.

These metrics showed a connection imbalance between worker threads even worse than what we saw on the server side: the connections on the outbound client-side proxy were all being handled by a single worker thread, which explains the poor throughput QPS numbers. Having fixed this on the server side, we applied a similar EnvoyFilter configuration, with minor tweaks for context and port, to address this imbalance:
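
Here's a sketch of that client-side variant (again, names and the workload label are illustrative; as noted below, we tried it against both the outbound port 8080 and the virtual outbound port 15001):

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: fortio-client-exact-balance    # illustrative name
      namespace: default                   # assumes the workload runs here
    spec:
      workloadSelector:
        labels:
          app: fortio-client               # assumed workload label
      configPatches:
      - applyTo: LISTENER
        match:
          context: SIDECAR_OUTBOUND
          listener:
            portNumber: 8080               # we also tried 15001, the virtual outbound
        patch:
          operation: MERGE
          value:
            connection_balance_config:
              exact_balance: {}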

Surely, applying this resource would fix our issue and we would be able to achieve high QPS with both client and server sidecar proxies with sufficient CPUs allocated to them. Well, we ran the experiment again and saw no difference in the throughput numbers. Checking the listener metrics again, we saw that even with this EnvoyFilter resource applied, only one worker thread was handling all the connections. We also tried applying the exact balance config on both virtual outbound port 15001 and outbound port 8080, but the throughput was still limited to 20K QPS.

This warranted the next round of investigations.

Original Destination Listeners, Exact Balance Issues

We went around looking in the Envoy code and opened GitHub issues to understand why the client-side exact balance configuration was not taking effect while the server side was working wonders. The key difference between the two listeners, other than the directionality, was that the virtual outbound listener "0.0.0.0:15001" is an original destination listener, which hands over connections to other listeners matched on the original destination address. With help from the Istio community (thanks, Yuchen Dai from Google), we found this open issue, which explains this behavior in a rather cryptic way.

Basically, the current exact balance implementation relies on connection counters per worker thread to fix the imbalance. When original destination is enabled on the virtual outbound listener, the connection counter on the worker thread is incremented when a connection is received, but as the connection is immediately handed over to the more specific listener like "0.0.0.0:8080," it is decremented again. This quick increase and decrease in the internal count spoofs the exact balancer into thinking the balance is perfect, as all these counters are always at zero. It also appears that applying exact balance on the listener that handles the connection ("0.0.0.0:8080" in this case) but doesn't accept it from the kernel has no effect, due to current implementation limitations.

Fortunately, the fix for this issue is in progress, and we’ll be working with the community to get this addressed as quickly as possible. In the meantime, if you’re getting hit by these performance issues on the client side, scaling out with a lower concurrency setting is a better approach to reach higher throughput QPS numbers than scaling up with higher concurrency and worker threads. We are also working with the Istio community to provide configuration knobs for enabling exact balance in Envoy to optionally switch default settings so that everyone can benefit from our findings.

Working on this performance analysis was interesting and a challenge in its own way, like the small tractor next to the giant ship trying to make it move.

Well, maybe not exactly, but it was a learning experience for me and my team, and I'm glad we are able to share our learnings with the rest of the community, as this aspect of Istio is often overlooked by the broader vendor ecosystem. We will run and publish performance numbers related to the impact of turning on various features such as mTLS, access logging and tracing in high-throughput scenarios in future blogs, so if you're interested in this topic, subscribe to our blog to get updates or reach out to us with any questions.

I would like to give a special mention to my team members Pawel and Bart, who patiently and diligently ran various test scenarios, collected data and were uncompromising in their pursuit to get the last bit out of Istio and Aspen Mesh. It's not surprising; after all, as part of F5, taking performance seriously is just part of our DNA. 



How to Capture Packets that Don’t Exist

One of my favorite networking tools is Wireshark; it shows you a packet-by-packet view of what's going on in your network. Wireshark's packet capture view is the lowest level and most extensive you can get before you have to bust out the oscilloscope. This practice is well-established in the pre-Kubernetes world, but it runs into some challenges if you're moving to a Cloud Native environment. If you are using or moving to Cloud Native, you're still going to want packet-level tools and techniques, and that's why we built Aspen Mesh Packet Inspector: it's designed to address these challenges so you can more easily see what's going on in your network without the complexity.  

Let me explain the challenges facing our users moving into a Kubernetes world. It's important to note that there are two parts to a troubleshooting session based on packet capture: actually capturing the packets, and then loading them into your favorite tool. Aspen Mesh Packet Inspector enables users to capture packets even in Kubernetes. That's the part you need to address to power all the existing tools you probably already have. Leveraging the existing tools is important, as our customers have invested heavily in them. And not just monetarily – their reputation for reliable apps, services and networks depends on the reliability and usefulness of the tools, procedures and experience powered by a packet view. 

What’s so hard about capturing these packets in modern app architectures on Kubernetes? The two biggest challenges are that these packets may never be actual packets that go through a switch, and even if they were, they’d be encrypted and useless. 

Outside of the Kubernetes world, there are many different approaches to capture packets. You can capture packets right on your PC to debug a local issue. For serious network debugging, you’re usually capturing packets directly on networking hardware, like a monitor port on a switch, or dedicated packet taps or brokers. But in Kubernetes, some traffic will never hit a dedicated switch or tap. Kubernetes is used to schedule multiple containers onto the same physical or virtual machine. If one container wants to talk to another container that happens to be on the same machine, then the packets exchanged between them are virtual – they're just bytes in RAM that the operating system shuffles between containers. 

There's no guarantee that the two containers you care about will be scheduled onto the same machine, and there's no guarantee that they won't be. In fact, if you know two containers are going to want to talk to each other a lot, it's a good idea to encourage scheduling on the same node for performance: these virtual packets don't consume any capacity on your switch, and advanced techniques can accelerate container-to-container traffic inside a machine. 

Customers that stake their reputation on reliability don’t like mixing “critical tool” and “no guarantee”.  They need to capture traffic right at the edge of the container. That’s what Aspen Mesh Packet Inspector does. It’s built into Carrier-Grade Aspen Mesh, a service mesh purpose built for these critical applications. 

There’s still a problem though – if you are building apps on Kubernetes, you should be encrypting traffic between pods. It’s a best practice that is also required by various standards including those behind 5G.  In the past, capture tools have relied on access to the encryption key to show the decrypted info. New encryption like TLS1.3 has a feature called “forward secrecy” that impedes this. Forward secrecy means every connection is protected with its own temporary key that was securely created by the client and the server – if your tool wasn’t in-the-middle when this key was generated, it’s too late. Access to the server’s encryption key later won't work. 

One approach is to force a broker or tap into the middle for all connections. But that means you need a powerful (i.e. expensive) broker, and it’s a single-point-of-failure. Worse, it’s a security single-point-of-failure: everything in the network has to trust it to get in the middle of all conversations. 

Our users already have something better suited – an Aspen Mesh sidecar (built on Envoy). They’re already using a sidecar next to each container to offload encryption using strong techniques like mutual TLS with forward secrecy. Each sidecar has only one security identity for the particular app container it is protecting, so sidecars can safely authenticate each other without any trusted-box-in-the-middle games. 

That’s the second key part of Aspen Mesh Packet Inspector – because Aspen Mesh is where the plaintext-to-encrypted operation happens (right before leaving the Kubernetes pod), we can record the plaintext. We capture the plaintext and slice it into virtual packets (in a standard “pcap” format). When we feed it to a capture system like a packet broker, we use mutual TLS to protect the captured data.  Our users combine this with a secure packet broker, and get to see the plaintext that was safely and securely transported all the way from the container edge to their screen. 

If you’re a service provider operating Kubernetes at scale, packet tapping capabilities are critical for you to be able to operate the networks effectively, securely and within regulatory and compliance standards. Aspen Mesh Packet Inspector provides the missing link in Kubernetes, providing full packet visibility for troubleshooting and meeting lawful intercept requirements.  



Doubling Down On Istio

Good startups believe deeply that something is true about the future, and organize around it.

When we founded Aspen Mesh as a startup inside of F5, my co-founders and I believed these things about the future:

  1. App developers would accelerate their pace of innovation by modularizing and building APIs between modules packaged in containers.
  2. Kubernetes APIs would become the lingua franca for describing app and infrastructure deployments and Kubernetes would be the best platform for those APIs.
  3. The most important requirement for accelerating is to preserve control without hindering modularity, and that’s best accomplished as close to the app as possible.

We built Aspen Mesh to address item 3. If you boil down reams of pitch decks, board-of-directors updates, marketing and design docs dating back to summer of 2017, that's it. That's what we believe, and I still think we're right.

Aspen Mesh is a service mesh company, and the lowest levels of our product are the open-source service mesh Istio. Istio has plenty of fans and detractors; there are plenty of legitimate gripes and more than a fair share of uncertainty and doubt (as is the case with most emerging technologies). With that in mind, I want to share why we selected Istio and Envoy for Aspen Mesh, and why we believe more strongly than ever that they're the best foundation to build on.

 

Why a service mesh at all?

A service mesh is about connecting microservices. The acceleration we're talking about relies on applications that are built out of small units (predominantly containers) that can be developed and owned by a single team. Stitching these units into an overall application requires APIs between them. APIs are the contract. Service Mesh measures and assists contract compliance. 

There's more to it than reading the 12-factor app. All these microservices have to effectively communicate to actually solve a user's problem. Communication over HTTP APIs is well supported in every language and environment, so it has never been easier to get started. However, don't let the simplicity delude you: you are now building a distributed system. 

We don't believe the right approach is to demand deep networking and infrastructure expertise from everyone who wants to write a line of code.  You trade away the acceleration enabled by containers for an endless stream of low-level networking challenges (as much as we love that stuff, our users do not). Instead, you should preserve control by packaging all that expertise into a technology that lives as close to the application as possible. For Kubernetes-based applications, this is a common communication enhancement layer called a service mesh.

How close can you get? Today, we see users having the most success with Istio's sidecar container model. We forecasted that in 2017, but we believe the concept ("common enhancement near the app") will outlive the technical details.

This common layer should observe all the communication the app is making; it should secure that communication and it should handle the burdens of discovery, routing, version translation and general interoperability. The service mesh simplifies and creates uniformity: there's one metric for "HTTP 200 OK rate", and it's measured, normalized and stored the same way for every app. Your app teams don't have to write that code over and over again, and they don't have to become experts in retry storms or circuit breakers. Your app teams are unburdened of infrastructure concerns so they can focus on the business problem that needs solving.  This is true whether they write their apps in Ruby, Python, node.js, Go, Java or anything else.

That's what a service mesh is: a communication enhancement layer that lives as close to your microservice as possible, providing a common approach to controlling communication over APIs.

 

Why Istio?

Just because you need a service mesh to secure and connect your microservices doesn't mean Envoy and Istio are the only choice.  There are many options in the market when it comes to service mesh, and the market still seems to be expanding rather than contracting. Even with all the choices out there, we still think Istio and Envoy are the best choice.  Here's why.

We launched Aspen Mesh after learning some lessons with a precursor product. We took what we learned, re-evaluated some of our assumptions and reconsidered the biggest problems development teams using containers were facing. It was clear that users didn't have a handle on managing the traffic between microservices. We also saw that few were using microservices in earnest yet, so we realized this problem would only get more urgent as microservices adoption increased. 

So, in 2017 we asked: what would characterize the technology that solved that problem?

We compared our own nascent work with other purpose-built meshes like Linkerd (in the 1.0 Scala-based implementation days) and Istio, and non-mesh proxies like NGINX and HAProxy. This was long before service mesh options like Consul, Maesh, Kuma and OSM existed. Here's what we thought was important:

  • Kubernetes First: Kubernetes is the best place to position a service mesh close to your microservice. The architecture should support VMs, but it should serve Kubernetes first.
  • Sidecar "bookend" Proxy First: To truly offload responsibility to the mesh, you need a datapath element as close as possible to the client and server.
  • Kubernetes-style APIs are Key: Configuration APIs are a key cost for users.  Human engineering time is expensive. Organizations are judicious about what APIs they ask their teams to learn. We believe Kubernetes API design and mechanics got it right. If your mesh is deployed in Kubernetes, your API needs to look and feel like Kubernetes.
  • Open Source Fundamentals: Customers will want to know that they are putting sustainable and durable technology at the core of their architecture. They don't want a technical dead-end. A vibrant open source community ensures this via public roadmaps, collaboration, public security audits and source code transparency.
  • Latency and Efficiency: These performance characteristics matter more than total throughput for modern applications.

As I look back at our documented thoughts, I see other concerns, too (p99 latency in languages with dynamic memory management, layer 7 programmability). But the above were the key items that we were willing to bet on. So it became clear that we had to place our bet on Istio and Envoy. 

Today, most of that list seems obvious. But in 2017, Kubernetes hadn’t quite won. We were still supporting customers on Mesos and Docker Datacenter. The need for service mesh as a technology pattern was becoming more obvious, but back then Istio was novel - not mainstream. 

I'm feeling very good about our bets on Istio and Envoy. There have been growing pains to be sure. When I survey the state of these projects now, I see mature, but not stagnant, open source communities.  There's a plethora of service mesh choices, so the pattern is established.  Moreover the continued prevalence of Istio, even with so many other choices, convinces me that we got that part right.

 

But what about...?

While Istio and Envoy are a great fit for all those bullets, there are certainly additional considerations. As with most concerns in a nascent market, some are legitimate and some are merely noise. I'd like to address some of the most common that I hear from conversations with users.

"I hear the control plane is too complex" - We hear this one often. It’s largely a remnant of past versions of Istio that have been re-architected to provide something much simpler, but there's always more to do. We're always trying to simplify. The two major public steps that Istio has taken to remedy this include removing standalone Mixer, and co-locating several control plane functions into a single container named istiod.

However, there's some stuff going on behind the curtains that doesn't get enough attention. Kubernetes makes it easy to deploy multiple containers. Personally, I suspect the root of this complaint wasn't so much "there are four running containers when I install" but "Every time I upgrade or configure this thing, I have to know way too many details."  And that is fixed by attention to quality and user-focus. Istio has made enormous strides in this area. 

"Too many CRDs" - We've never had an actual user of ours take issue with a CRD count (the set of API objects it's possible to define). However, it's great to minimize the number of API objects you may have to touch to get your application running. Stealing a paraphrasing of Einstein, we want to make it as simple as possible, but no simpler. The reality: Istio drastically reduced the CRD count with new telemetry integration models (from "dozens" down to 23, with only a handful involved in routine app policies). And Aspen Mesh offers a take on making it even simpler with features like SecureIngress that map CRDs to personas - each persona only needs to touch 1 custom resource to expose an app via the service mesh.

"Envoy is a resource hog" - Performance measurement is a delicate art. The first thing to check is that wherever you're getting your info from has properly configured the system-under-measurement.  Istio provides careful advice and their own measurements here.  Expect latency additions in the single-digit-millisecond range, knowing that you can opt parts of your application out that can't tolerate even that. Also remember that Envoy is doing work, so some CPU and memory consumption should be considered a shift or offload rather than an addition. Most recent versions of Istio do not have significantly more overhead than other service meshes, but Istio does provide twice as many feature, while also being available in or integrating with many more tools and products in the market. 

"Istio is only for really complicated apps” - Sure. Don’t use Istio if you are only concerned with a single cluster and want to offload one thing to the service mesh. People move to Kubernetes specifically because they want to run several different things. If you've got a Money-Making-Monolith, it makes sense to leave it right where it is in a lot of cases. There are also situations where ingress or an API gateway is all you need. But if you've got multiple apps, multiple clusters or multiple app teams then Kubernetes is a great fit, and so is a service mesh, especially as you start to run things at greater scale.

In scenarios where you need a service mesh, it makes sense to use the service mesh that gives you a full suite of features. A nice thing about Istio is you can consume it piecemeal - it does not have to be implemented all at once. So you only need mTLS and tracing now? Perfect. You can add mTLS and tracing now and have the option to add metrics, canary, traffic shifting, ingress, RBAC, etc. when you need it.

We’re excited to be on the Istio journey and look forward to continuing to work with the open source community and project to continue advancing service mesh adoption and use cases. If you have any particular question I didn’t cover, feel free to reach out to me at @notthatjenkins. And I'm always happy to chat about the best way to get started on or continue with service mesh implementation. 



Steering The Future Of Istio

I’m honored to have been chosen by the Istio community to serve on the Istio Steering Committee along with Christian Posta, Zack Butcher and Zhonghu Xu. I have been fortunate to contribute to the Istio project for nearly three years and am excited by the huge strides the project has made in solving key challenges that organizations face as they shift to cloud-native architecture. 

Maybe what’s most exciting is the future direction of the project. The core Istio community realizes and advocates that innovation in Open Source doesn't stop with technology - it’s just the starting point. New and innovative ways of growing the community include making contributions easier, Working Group meetings more accessible and community meetings an open platform for end users to give their feedback. As a member of the steering committee, one of my main goals will be to make it easier for a diverse group of people to more easily contribute to the project.

To share my personal journey with Istio: when I started contributing, I found it intimidating to present rough ideas or proposals in an open Networking WG meeting filled with experts and leaders from Google and IBM (even though they were very welcoming). I understand how difficult it can be to get started on contributing to a new community, so I want to ensure the Working Group and community meetings are a place for end users and new contributors to share ideas openly, and also to learn from industry experts. I will focus on increasing participation from diverse groups by working to make Istio the most welcoming community possible. In this vein, it will be important for the Steering Committee to further define and enforce a code of conduct, creating a safe place for all contributors.

The Istio community’s effort towards increasing open governance by ensuring no single organization has control over the future of the project has certainly been a step in the right direction with the new makeup of the steering committee. I look forward to continuing work in this area to make Istio the most open project it can be. 

Outside of code contributions, marketing and brand identity are critically important aspects of any open source project. It will be important to encourage contributions from marketing and business leaders to ensure we recognize non-technical contributions. Addressing this is less straightforward than encouraging and crediting code commits, but a diverse, vendor-neutral marketing team in open source can create powerful ways to reach users and drive adoption, which is critical to the success of any open source project. Recent user empathy sessions and user survey forms are a great starting point, but our ability to put these learnings into action and adapt as a community will be a key driver in growing project participation.

Last, but definitely not least, I’m keen to leverage my experience and feedback from years of work with Aspen Mesh customers and broad enterprise experience to make Istio a more robust and production-ready project. 

In this vein, my fellow Aspen Mesher Jacob Delgado has worked tirelessly for many months contributing to Istio. As a result of his contributions, he has been named a co-lead for the Istio Product Security Working Group. Jacob has been instrumental in championing security best practices for the project and has also helped responsibly remediate several CVEs this year. I’m excited to see more contributors like Jacob make significant improvements to the project.

I'm humbled by the support of the community members who voted in the steering elections and chose such a talented team to shepherd Istio forward. I look forward to working with all the existing, and hopefully many new, members of the Istio community! You can always reach out to me through email, Twitter or Istio Slack for any community, technical or governance matter, or if you just want to chat about a great idea you have.


Helping Istio Sail

Around three years ago, we recognized the power of service mesh as a technology pattern to help organizations manage the next generation of software delivery challenges, and that led to us founding Aspen Mesh. And we know that efficiently managing high-scale Kubernetes applications requires the power of Istio. Having been a part of the Istio project since 0.2, we have seen countless users and customers benefit from the observability, security and traffic management capabilities that a service mesh like Istio provides. 

Our engineering team has worked closely with the Istio community over the past three years to help drive stability and add new features that make Istio increasingly easy for users to adopt. We believe in the value that open source software — and an open community — brings to the enterprise service mesh space, and we are driven to help lead that effort by setting sound technical direction. By contributing expertise, code and design ideas like Virtual Service delegation and enforcing RBAC policies for Istio resources, we have focused much of our work on making Istio more enterprise-ready. One of these contributions is Istio Vet, which was created and open sourced in the early days of Aspen Mesh as a way to enhance Istio's user experience with multi-resource configuration validation. Istio Vet proved to be very valuable to users, so we worked closely with the Istio community to create istioctl analyze, adding key configuration analysis capabilities to Istio. It's very exciting to see that many of these ideas have now been implemented in the community and are part of the available feature set for broader consumption. 

As the Istio project and its users mature, there is a greater need for open communication around product security and CVEs. Recognizing this, we were fortunate to be able to help create the early disclosure process and lead the product security working group which ensures the overall project is secure and not vulnerable to known exploits. 

We believe that an open source community thrives when users and vendors are considered equal partners. We feel privileged that we have been able to provide feedback from our customers that has helped accelerate Istio's evolution. In addition, it has been a privilege to share the networking expertise from our F5 heritage with the Istio project as maintainers and leads of important functional areas and key working groups.

The technical leadership of Istio is a meritocracy, and we are honored that our sustained efforts have been recognized with my appointment to the Technical Oversight Committee.

As a TOC member, I am excited to work with other Istio community leaders to focus the roadmap on solving customer problems while keeping our user experience top of mind. The next and most critical challenges to solve are day-two problems and beyond, where we need to ensure smoother upgrades, enhanced security and scalability of the system. We envision use cases emerging from industries like telco and FinServ that will push the envelope of technical capabilities beyond what many can imagine.

It has been amazing to see the user growth and the maturity of the Istio project over the last three years. We firmly believe that a more diverse leadership and an open governance in Istio will further help to advance the project and increase participation from developers across the globe.

The fun is just getting started and I am honored to be an integral part of Istio’s journey! 



When Do You Need A Service Mesh?

One of the questions I often hear is: "Do I really need a service mesh?" The honest answer is "It depends." Like nearly everything in the technology space (or more broadly "nearly everything"), this depends on the benefits and costs. But after having helped users progress from exploration to production deployments in many different scenarios, I'm here to share my perspective on which inputs to include in your decision-making process.

A service mesh provides a consistent way to connect, secure and observe microservices. Most service meshes are tightly integrated with an orchestration platform, commonly Kubernetes. There's no way around it; a service mesh is another thing, and at least part of your team will have to learn it. That's a cost, and you should compare that cost to the benefits of operational simplification you may achieve.

But apart from costs and benefits, what should you be asking in order to determine if you really need a service mesh? The number of microservices you’re running, as well as urgency and timing, can have an impact on your needs.

How Many Microservices?

If you're deploying your first or second microservice, I think it is just fine to not have a service mesh. You should, instead, focus on learning Kubernetes and factoring stateless containers out of your applications first. You will naturally build familiarity with the problems that a service mesh can solve, and that will make you much better prepared to plan your service mesh journey when the time comes.

If you have an existing application architecture that provides the observability, security and resilience that you need, then you are already in a good place. For you, the question becomes when to add a service mesh. We usually see organizations notice the toil associated with utility code to integrate each new microservice. Once that toil gets painful enough, they evaluate how they could make that integration more efficient. We advocate using a service mesh to reduce this toil.

The exact point at which service mesh benefits clearly outweigh costs varies from organization to organization. In my experience, teams often realize they need a consistent approach once they have five or six microservices. However, many users push to a dozen or more microservices before they notice the increasing cost of utility code and the increasing complexity of slight differences across their applications. And, of course, some organizations continue scaling and never choose a service mesh at all, investing in application libraries and tooling instead. On the other hand, we also work with early birds that want to get ahead of the rising complexity wave and introduce service mesh before they've got half-a-dozen microservices. But the number of microservices you have isn’t the only part to consider. You’ll also want to consider urgency and timing. 

Urgency and Timing

Another part of the answer to “When do I need a service mesh?” includes your timing. The urgency of considering a service mesh depends on your organization’s challenges and goals, but can also be considered by your current process or state of operations. Here are some states that may reduce or eliminate your urgency to use a service mesh:

  1. Your microservices are all written in one language ("monoglot") by developers in your organization, building from a common framework.
  2. Your organization dedicates engineers to building and maintaining org-specific tooling and instrumentation that's automatically built into every new microservice.
  3. You have a partially or totally monolithic architecture where application logic is built into one or two containers instead of several.
  4. You release or upgrade all-at-once after a manual integration process.
  5. You use application protocols that are not served by existing service meshes (i.e., something other than HTTP, HTTP/2 or gRPC).

On the other hand, here are some signals that you will need a service mesh and may want to start evaluating or adopting early:

  1. You have microservices written in many different languages that may not follow a common architectural pattern or framework (or you're in the middle of a language/framework migration).
  2. You're integrating third-party code or interoperating with teams that are a bit more distant (for example, across a partnership or M&A boundary) and you want a common foundation to build on.
  3. Your organization keeps "re-solving" problems, especially in the utility code (my favorite example: certificate rotation, while important, is no scrum team's favorite story in the backlog).
  4. You have robust security, compliance or auditability requirements that span services.
  5. Your teams spend more time localizing or understanding a problem than fixing it.

I consider this last point the three-alarm fire that you need a service mesh, and it's a good way to return to the quest for simplification. When an application is failing to deliver a quality experience to its users, how does your team resolve it? We work with organizations that report that finding the problem is often the hardest and most expensive part. 

What Next?

Once you've localized the problem, can you alleviate or resolve it? It's a painful situation if the only fix is to develop new code or rebuild containers under pressure. That's where you see the benefit from keeping resiliency capabilities independent of the business logic (like in a service mesh).

If this story is familiar to you, you may need a service mesh right now. If you're getting by with your existing approach, that’s great. Just keep in mind the costs and benefits of what you’re working with, and keep asking:

  1. Is what you have right now really enough, or are you spending too much time trying to find problems instead of developing and providing value for your customers?
  2. Are your operations working well with the number of microservices you have, or is it time to simplify?
  3. Do you have critical problems that a service mesh would address?

Keeping tabs on the answers to these questions will help you determine if — and when — you really need a service mesh.

In the meantime if you're interested in learning more about service mesh, check out The Complete Guide to Service Mesh.


Aspen Mesh - Service Mesh Security and Compliance

Aspen Mesh 1.4.4 & 1.3.8 Security Update

Aspen Mesh is announcing the release of 1.4.4 and 1.3.8 (based on upstream Istio 1.4.4 & 1.3.8), both addressing a critical security vulnerability. All customers are strongly encouraged to upgrade immediately to the release that corresponds to their currently deployed Aspen Mesh version.

The Aspen Mesh team reported this CVE to the Istio community per Istio's CVE reporting guidelines, and our team was able to contribute further to the community--and to all Istio users--by fixing this issue in upstream Istio. As this CVE is extremely easy to exploit and its risk score was deemed very high (9.0), we wanted to respond with urgency by getting a patch into upstream Istio and out to our customers as quickly as possible. This is just one of the many ways we provide value to our customers as a trusted partner, ensuring that the pieces from Istio (and Aspen Mesh) are secure and stable.

Below are details about CVE-2020-8595, steps to verify whether you’re currently vulnerable, and how to upgrade to the patched releases.

CVE Description

A bug in Istio's Authentication Policy exact-path matching logic allows unauthorized access to resources without a valid JWT token. This bug affects all versions of Istio (and Aspen Mesh) that support JWT Authentication Policy with path-based trigger rules (all 1.3 & 1.4 releases). The exact-path matching logic in the Istio JWT filter includes query strings and fragments instead of stripping them off before matching, which means attackers can bypass JWT validation by appending “?” or “#” characters after a protected path.

Example Vulnerable Istio Configuration

A JWT Authentication Policy can be configured to trigger on an exact HTTP path match such as “/productpage”, as sketched below. In this example, the Authentication Policy is applied at the ingress gateway service, so any request whose path exactly matches “/productpage” requires a valid JWT token; without one, the request is denied and never forwarded to the productpage service. However, due to this CVE, a request made to the ingress gateway with the path "/productpage?" and no valid JWT token is not denied--it is forwarded to the productpage service, allowing access to a protected resource without a valid token.
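
Here is a minimal sketch of such a policy, assuming the authentication.istio.io/v1alpha1 API used by Istio 1.3 and 1.4; the policy name, issuer, and JWKS URI are illustrative:

    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: ingress-jwt              # illustrative name
      namespace: istio-system
    spec:
      targets:
      - name: istio-ingressgateway   # apply the policy at the ingress gateway service
      origins:
      - jwt:
          issuer: "https://auth.example.com"                        # illustrative issuer
          jwksUri: "https://auth.example.com/.well-known/jwks.json" # illustrative JWKS endpoint
          triggerRules:
          - includedPaths:
            - exact: /productpage    # require a valid JWT only for this exact path
      principalBinding: USE_ORIGIN

With a policy like this in place on a vulnerable version, a request for “/productpage” without a token is denied as expected, but the same request for “/productpage?” or “/productpage#” sails through to the backend.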

Validation

Since this vulnerability is in the Envoy filter added by Istio, you can also check the proxy image you’re using in your cluster to see whether you’re currently vulnerable. Download and run this script to verify whether the proxy image you’re using is vulnerable. Thanks to Francois Pesce from the Google Istio team for helping us create this test script.

Upgrading to This Version

Please follow the upgrade instructions in our documentation to upgrade to these versions. Since this vulnerability affects the sidecar proxies and gateways (ingress and egress), it is important to follow the post-upgrade tasks here and perform a rolling upgrade of all of your sidecar proxies and gateways.



Announcing Aspen Mesh Secure Ingress Policy

In the new Application Economy, developers are building and deploying applications at a far greater frequency than ever before. Organizations gain this agility by shifting to microservices architectures, powered by Kubernetes and continuous integration and delivery (CI/CD). For businesses to derive value from these applications, the applications need to be exposed to the outside world in a secure way so that customers can access them--and have a great user experience. That’s such an obvious statement, you’re probably wondering why I even bothered saying it.

Well, within most organizations, securely exposing an application to the outside world is complicated. Ports, protocols, paths, and auth requirements need to be collected. Traffic routing resources and authentication policies need to be configured. DNS entries and TLS certificates need to be created and mounted. Application teams know some of these things and platform owners know others. Pulling all of this together is painful and time consuming.

This problem is exacerbated by the lack of APIs mapping intent to the persona performing the task. Let’s take a quick look at the current landscape.

Kubernetes handles orchestration (deploying and upgrading) of applications, but it doesn't provide any way to capture application behavior like the protocols, paths, and security requirements of an application's APIs. To securely expose an application to users, platform operators currently have to gather this information from developers in private conversations and then create additional Kubernetes resources, like an Ingress, which in turn create the plumbing that allows traffic from outside the cluster and routes it to the appropriate backend application. Alternatively, Istio's advanced routing capabilities make it possible to control more aspects of traffic management, and at the same time allow developers to offload functionality like JWT authentication from their applications to the infrastructure.
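
To make the hand-crafted approach concrete, here's a minimal sketch of the kind of Ingress a platform operator might create today; the host name, certificate secret, ingress class, and backend service are illustrative assumptions:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: productpage
    spec:
      ingressClassName: nginx            # illustrative controller choice
      tls:
      - hosts:
        - bookinfo.example.com
        secretName: bookinfo-cert        # certificate created and mounted separately
      rules:
      - host: bookinfo.example.com
        http:
          paths:
          - path: /productpage
            pathType: Exact
            backend:
              service:
                name: productpage
                port:
                  number: 9080

Nearly every field here--the path, the port, the host, the certificate--is information the operator has to collect from the application team before anything can be exposed.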

But the missing piece in both of these scenarios is a reliable, scalable way to gather information about applications from developers independently of platform operators, and to enable platform operators to securely configure the resources they need to expose those applications.

Additionally, configuring the Ingress or Istio routing APIs is only part of the puzzle. Operators also need to set up domain names (DNS) and obtain domain certificates (static, or dynamic via Let’s Encrypt, for example) in order to secure the traffic entering their clusters. All of this means managing a lot of moving pieces, with the possibility of failure at multiple steps along the way.

Aspen Mesh Policy Framework - Before

To solve these challenges, we are excited to announce the release of Aspen Mesh Secure Ingress Policy.

A New Way to Securely Expose Your Applications

Our goal in developing this new Secure Ingress Policy framework is to help streamline communication between application developers and platform operators. With this new feature, both personas can stay productive while working together.

The way it works is that application developers provide a specification for their service, which they can store in their code management systems and communicate to Aspen Mesh through an Application API. This spec includes the service port and protocols to expose via ingress, as well as the API paths and authentication requirements (e.g., JWT validation).

Platform operators provide a specification that defines the security and networking aspects of the platform and communicate it to Aspen Mesh via a Secure Ingress API. This spec includes certificate secrets, the domain name, and the JWKS server and issuer.
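
Purely as an illustration of how the two inputs divide, the specs might look roughly like the following. The resource kinds and field names below are hypothetical stand-ins based on the descriptions above, not the actual Aspen Mesh API:

    # Hypothetical developer-owned application spec (field names are illustrative)
    kind: Application
    metadata:
      name: productpage
    spec:
      service: productpage
      port: 9080
      protocol: HTTP
      paths:
      - path: /productpage
        authentication: jwt              # this path requires JWT validation
    ---
    # Hypothetical operator-owned secure ingress spec (field names are illustrative)
    kind: SecureIngress
    metadata:
      name: bookinfo-ingress
    spec:
      domain: bookinfo.example.com
      certificateSecret: bookinfo-cert
      jwt:
        issuer: "https://auth.example.com"
        jwksUri: "https://auth.example.com/.well-known/jwks.json"

The point is the split: the first spec contains only things the application team knows, and the second only things the platform team knows.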

Aspen Mesh Policy Framework - After

Aspen Mesh takes these inputs and creates all of the necessary system resources--Istio Gateways, VirtualServices, and Authentication Policies--and also configures DNS entries and retrieves certificates to enable secure access to the application. If you have ever configured these by hand, you know the complexity involved in getting it right. With this new feature, we want our customers to focus on what's important to them and let Aspen Mesh take care of the infrastructure. Additionally, the Aspen Mesh controllers always keep these resources in sync and augment them as the Secure Ingress and Application resources are updated.
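
For a sense of what "getting this right by hand" involves, here's a rough sketch of the kind of Istio resources that sit underneath; the host, port, and credential name are illustrative, not the exact output Aspen Mesh generates:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: bookinfo-gateway
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 443
          name: https
          protocol: HTTPS
        tls:
          mode: SIMPLE
          credentialName: bookinfo-cert  # certificate retrieved and renewed separately
        hosts:
        - bookinfo.example.com
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: bookinfo
    spec:
      hosts:
      - bookinfo.example.com
      gateways:
      - bookinfo-gateway
      http:
      - match:
        - uri:
            exact: /productpage
        route:
        - destination:
            host: productpage
            port:
              number: 9080

On top of resources like these, the DNS records, certificates, and authentication policies all have to exist and stay in sync--exactly the coordination the Secure Ingress Policy is meant to absorb.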

Another important benefit of mapping APIs to personas is the ability to create ownership and store configuration in the right place. Keep the application-centric specifications in code, right next to the application, so that you can review and update both as part of your normal code review process. You don't need another process or workflow to apply configuration changes out-of-band with your code changes. And because these things live together, they can naturally be deployed at the same time, reducing misconfigurations.

The overarching goal for our APIs was to let platform operators retain a strategic point of control to enforce policies while allowing application developers to move quickly and deliver customer-facing features--and, most importantly, to let our customers' customers use the application reliably and securely.

Today, we're natively integrated with AWS Route 53, and we expect to offer integrations with Azure and GCP in the near future. We also retrieve and renew domain certificates from Let’s Encrypt; the only thing operators need to provide is their registered email address, and the rest is handled by the Aspen Mesh control plane.

Interested in Secure Ingress Policy? Reach out to one of our experts to learn more about how you can implement Aspen Mesh’s Secure Ingress Policies at your organization.