
Get App-focused Security from an Enterprise-class Service Mesh | On-demand Webinar

In our webinar, You’ve Got Kubernetes. Now you Need App-focused Security using Istio, now available on demand, we teamed up with Mirantis, an industry leader in enterprise-ready Kubernetes deployment and management, to talk about security, Kubernetes, service mesh, Istio and more. If you have Kubernetes, you’re off to a great start with a great platform for security based on microsegmentation and network policy. But firewalls and perimeters aren’t enough -- even in their modern, in-cluster form.

As enterprises embark on the cloud journey, modernizing applications with microservices and containers running on Kubernetes is key to application portability, code reuse and automation. But along with these advantages come significant security and operational challenges due to security threats at various layers of the stack. While Kubernetes platform providers like Mirantis manage security at the infrastructure, orchestration and container level, the challenge at application services level remains a concern. This is where a service mesh comes in. 

Companies with a hyper-focus on security – like those in healthcare, finance, government, and other highly regulated industries – demand the highest level of security possible to thwart cyberthreats, data breaches and compliance failures. You can up-level your security by adding a service mesh that’s able to secure thousands of connections between microservice containers inside a single cluster or across the globe. Today, Istio is the gold standard for an enterprise-class service mesh for building Zero Trust Security. But I’m not the first to say that implementing open source Istio has its challenges -- and it can cause a lot of headaches when Istio deployment and management is added to a DevOps team’s workload without some forethought.

Aspen Mesh delivers an Istio-based, security-hardened, enterprise-class service mesh that’s easy to manage. Our Istio solution reduces friction between the experts in your organization because it understands your apps -- and it seamlessly integrates into your SecOps approach and certificate authority architecture.

It’s not just about what knobs and config you adjust to get mTLS in one cluster – in our webinar we covered the architectural implications and lessons learned that’ll help you fit service mesh into your up-leveled Kubernetes security journey. It was a lively discussion with a lot of questions from attendees. Click the link below to watch the live webinar recording.

-Andrew

 

Click to watch webinar now:

On Demand Webinar | You’ve Got Kubernetes. Now you Need App-focused Security using Istio.

 The webinar gets technical as we delve into: 

  • How Istio controls North-South and East-West traffic, and how it relates to application-level traffic. 
  • How Istio secures communication between microservices. 
  • How to simplify operations and prevent security holes as the number of microservices in production grows. 
  • What is involved in hardening Istio into an enterprise-class service mesh. 
  • How mTLS provides a zero trust-based approach to security. 
  • How Aspen Mesh uses crypto to give each container its own identity (using a framework called SPIFFE). Then when containers talk to each other through the service mesh, they prove who they are cryptographically. 
  • Secure ingress and egress, and Cloud Native packet capture. 


How to Capture Packets that Don’t Exist

One of my favorite networking tools is Wireshark; it shows you a packet-by-packet view of what’s going on in your network. Wireshark’s packet capture view is the lowest-level and most extensive view you can get before you have to bust out the oscilloscope. Packet capture is well-established in the pre-Kubernetes world, but it brings some challenges if you’re moving to a Cloud Native environment. Since you’ll want packet-level tools and techniques in any environment, we built Aspen Mesh Packet Inspector. It’s designed to address these challenges so you can more easily see what’s going on in your network without the added complexity.

Let me explain the challenges facing our users moving into a Kubernetes world. It’s important to note that there are two parts to a troubleshooting session based on packet capture: actually capturing the packets, and then loading them into your favorite tool. Aspen Mesh Packet Inspector enables users to capture packets even in Kubernetes. That’s the part you need to address to power all the existing tools you probably already have. Leveraging the existing tools is important, as our customers have invested heavily in them. And not just monetarily – their reputation for reliable apps, services and networks depends on the reliability and usefulness of the tools, procedures and experience powered by a packet view.

What’s so hard about capturing these packets in modern app architectures on Kubernetes? The two biggest challenges are that these packets may never be actual packets that go through a switch, and even if they were, they’d be encrypted and useless. 

Outside of the Kubernetes world, there are many different approaches to capture packets. You can capture packets right on your PC to debug a local issue. For serious network debugging, you’re usually capturing packets directly on networking hardware, like a monitor port on a switch, or dedicated packet taps or brokers. But in Kubernetes, some traffic will never hit a dedicated switch or tap. Kubernetes is used to schedule multiple containers onto the same physical or virtual machine. If one container wants to talk to another container that happens to be on the same machine, then the packets exchanged between them are virtual – they're just bytes in RAM that the operating system shuffles between containers. 

There’s no guarantee that the two containers that you care about will be scheduled onto the same machine, and there’s no guarantee that they won’t be. In fact, if you know two containers are going to want to talk to each other a lot, it’s a good idea to encourage scheduling on the same node for performance: these virtual packets don’t consume any capacity on your switch, and advanced techniques can accelerate container-to-container traffic inside a machine.

Customers that stake their reputation on reliability don’t like mixing “critical tool” and “no guarantee”. They need to capture traffic right at the edge of the container. That’s what Aspen Mesh Packet Inspector does. It’s built into Carrier-Grade Aspen Mesh, a service mesh purpose-built for these critical applications.

There’s still a problem though – if you are building apps on Kubernetes, you should be encrypting traffic between pods. It’s a best practice that is also required by various standards, including those behind 5G. In the past, capture tools have relied on access to the encryption key to show the decrypted info. Newer encryption like TLS 1.3 has a feature called “forward secrecy” that impedes this. Forward secrecy means every connection is protected with its own temporary key that was securely created by the client and the server – if your tool wasn’t in-the-middle when this key was generated, it’s too late. Access to the server’s encryption key later won’t work.

One approach is to force a broker or tap into the middle for all connections. But that means you need a powerful (i.e. expensive) broker, and it’s a single-point-of-failure. Worse, it’s a security single-point-of-failure: everything in the network has to trust it to get in the middle of all conversations. 

Our users already have something better suited – an Aspen Mesh sidecar (built on Envoy). They’re already using a sidecar next to each container to offload encryption using strong techniques like mutual TLS with forward secrecy. Each sidecar has only one security identity for the particular app container it is protecting, so sidecars can safely authenticate each other without any trusted-box-in-the-middle games. 
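For reference, here’s how strict mutual TLS is typically turned on mesh-wide in recent Istio-based meshes, so every sidecar requires authenticated, encrypted connections from its peers (a minimal sketch; Aspen Mesh layers its own hardening and certificate-authority integration on top):

    # Require mutual TLS for every workload in the mesh.
    # Applying this in the root namespace (istio-system) makes it mesh-wide.
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system
    spec:
      mtls:
        mode: STRICT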

That’s the second key part of Aspen Mesh Packet Inspector – because Aspen Mesh is where the plaintext-to-encrypted operation happens (right before leaving the Kubernetes pod), we can record the plaintext. We capture the plaintext and slice it into virtual packets (in a standard “pcap” format). When we feed it to a capture system like a packet broker, we use mutual TLS to protect the captured data.  Our users combine this with a secure packet broker, and get to see the plaintext that was safely and securely transported all the way from the container edge to their screen. 

If you’re a service provider operating Kubernetes at scale, packet tapping capabilities are critical for you to be able to operate the networks effectively, securely and within regulatory and compliance standards. Aspen Mesh Packet Inspector provides the missing link in Kubernetes, providing full packet visibility for troubleshooting and meeting lawful intercept requirements.  


doubling down on istio

Doubling Down On Istio

Good startups believe deeply that something is true about the future, and organize around it.

When we founded Aspen Mesh as a startup inside of F5, my co-founders and I believed these things about the future:

  1. App developers would accelerate their pace of innovation by modularizing and building APIs between modules packaged in containers.
  2. Kubernetes APIs would become the lingua franca for describing app and infrastructure deployments and Kubernetes would be the best platform for those APIs.
  3. The most important requirement for accelerating is to preserve control without hindering modularity, and that’s best accomplished as close to the app as possible.

We built Aspen Mesh to address item 3. If you boil down reams of pitch decks, board-of-directors updates, marketing and design docs dating back to summer of 2017, that's it. That's what we believe, and I still think we're right.

Aspen Mesh is a service mesh company, and the lowest levels of our product are the open-source service mesh Istio. Istio has plenty of fans and detractors; there are plenty of legitimate gripes and more than a fair share of uncertainty and doubt (as is the case with most emerging technologies). With that in mind, I want to share why we selected Istio and Envoy for Aspen Mesh, and why we believe more strongly than ever that they're the best foundation to build on.

 

Why a service mesh at all?

A service mesh is about connecting microservices. The acceleration we're talking about relies on applications that are built out of small units (predominantly containers) that can be developed and owned by a single team. Stitching these units into an overall application requires APIs between them. APIs are the contract. A service mesh measures and assists compliance with that contract.

There's more to it than reading the 12-factor app. All these microservices have to communicate effectively to actually solve a user's problem. Communication over HTTP APIs is well supported in every language and environment, so it has never been easier to get started. However, don't let the simplicity delude you: you are now building a distributed system.

We don't believe the right approach is to demand deep networking and infrastructure expertise from everyone who wants to write a line of code.  You trade away the acceleration enabled by containers for an endless stream of low-level networking challenges (as much as we love that stuff, our users do not). Instead, you should preserve control by packaging all that expertise into a technology that lives as close to the application as possible. For Kubernetes-based applications, this is a common communication enhancement layer called a service mesh.

How close can you get? Today, we see users having the most success with Istio's sidecar container model. We forecasted that in 2017, but we believe the concept ("common enhancement near the app") will outlive the technical details.

This common layer should observe all the communication the app is making; it should secure that communication and it should handle the burdens of discovery, routing, version translation and general interoperability. The service mesh simplifies and creates uniformity: there's one metric for "HTTP 200 OK rate", and it's measured, normalized and stored the same way for every app. Your app teams don't have to write that code over and over again, and they don't have to become experts in retry storms or circuit breakers. Your app teams are unburdened of infrastructure concerns so they can focus on the business problem that needs solving.  This is true whether they write their apps in Ruby, Python, node.js, Go, Java or anything else.

That's what a service mesh is: a communication enhancement layer that lives as close to your microservice as possible, providing a common approach to controlling communication over APIs.

 

Why Istio?

Just because you need a service mesh to secure and connect your microservices doesn't mean Envoy and Istio are the only choice.  There are many options in the market when it comes to service mesh, and the market still seems to be expanding rather than contracting. Even with all the choices out there, we still think Istio and Envoy are the best choice.  Here's why.

We launched Aspen Mesh after learning some lessons with a precursor product. We took what we learned, re-evaluated some of our assumptions and reconsidered the biggest problems development teams using containers were facing. It was clear that users didn't have a handle on managing the traffic between microservices, and few were using microservices in earnest yet, so we realized this problem would only get more urgent as microservices adoption increased.

So, in 2017 we asked: what would characterize the technology that solved that problem?

We compared our own nascent work with other purpose-built meshes like Linkerd (in the 1.0 Scala-based implementation days) and Istio, and non-mesh proxies like NGINX and HAProxy. This was long before service mesh options like Consul, Maesh, Kuma and OSM existed. Here's what we thought was important:

  • Kubernetes First: Kubernetes is the best place to position a service mesh close to your microservice. The architecture should support VMs, but it should serve Kubernetes first.
  • Sidecar "bookend" Proxy First: To truly offload responsibility to the mesh, you need a datapath element as close as possible to the client and server.
  • Kubernetes-style APIs are Key: Configuration APIs are a key cost for users.  Human engineering time is expensive. Organizations are judicious about what APIs they ask their teams to learn. We believe Kubernetes API design and mechanics got it right. If your mesh is deployed in Kubernetes, your API needs to look and feel like Kubernetes.
  • Open Source Fundamentals: Customers will want to know that they are putting sustainable and durable technology at the core of their architecture. They don't want a technical dead-end. A vibrant open source community ensures this via public roadmaps, collaboration, public security audits and source code transparency.
  • Latency and Efficiency: These performance characteristics matter more than total throughput for modern applications.

As I look back at our documented thoughts, I see other concerns, too (p99 latency in languages with dynamic memory management, layer 7 programmability). But the above were the key items that we were willing to bet on. So it became clear that we had to place our bet on Istio and Envoy.

Today, most of that list seems obvious. But in 2017, Kubernetes hadn’t quite won. We were still supporting customers on Mesos and Docker Datacenter. The need for service mesh as a technology pattern was becoming more obvious, but back then Istio was novel - not mainstream. 

I'm feeling very good about our bets on Istio and Envoy. There have been growing pains to be sure. When I survey the state of these projects now, I see mature, but not stagnant, open source communities.  There's a plethora of service mesh choices, so the pattern is established.  Moreover the continued prevalence of Istio, even with so many other choices, convinces me that we got that part right.

 

But what about...?

While Istio and Envoy are a great fit for all those bullets, there are certainly additional considerations. As with most concerns in a nascent market, some are legitimate and some are merely noise. I'd like to address some of the most common concerns I hear in conversations with users.

"I hear the control plane is too complex" - We hear this one often. It’s largely a remnant of past versions of Istio that have been re-architected to provide something much simpler, but there's always more to do. We're always trying to simplify. The two major public steps that Istio has taken to remedy this include removing standalone Mixer, and co-locating several control plane functions into a single container named istiod.

However, there's some stuff going on behind the curtains that doesn't get enough attention. Kubernetes makes it easy to deploy multiple containers. Personally, I suspect the root of this complaint wasn't so much "there are four running containers when I install" but "Every time I upgrade or configure this thing, I have to know way too many details."  And that is fixed by attention to quality and user-focus. Istio has made enormous strides in this area. 

"Too many CRDs" - We've never had an actual user of ours take issue with a CRD count (the set of API objects it's possible to define). However, it's great to minimize the number of API objects you may have to touch to get your application running. Stealing a paraphrasing of Einstein, we want to make it as simple as possible, but no simpler. The reality: Istio drastically reduced the CRD count with new telemetry integration models (from "dozens" down to 23, with only a handful involved in routine app policies). And Aspen Mesh offers a take on making it even simpler with features like SecureIngress that map CRDs to personas - each persona only needs to touch 1 custom resource to expose an app via the service mesh.

"Envoy is a resource hog" - Performance measurement is a delicate art. The first thing to check is that wherever you're getting your info from has properly configured the system-under-measurement.  Istio provides careful advice and their own measurements here.  Expect latency additions in the single-digit-millisecond range, knowing that you can opt parts of your application out that can't tolerate even that. Also remember that Envoy is doing work, so some CPU and memory consumption should be considered a shift or offload rather than an addition. Most recent versions of Istio do not have significantly more overhead than other service meshes, but Istio does provide twice as many feature, while also being available in or integrating with many more tools and products in the market. 

"Istio is only for really complicated apps” - Sure. Don’t use Istio if you are only concerned with a single cluster and want to offload one thing to the service mesh. People move to Kubernetes specifically because they want to run several different things. If you've got a Money-Making-Monolith, it makes sense to leave it right where it is in a lot of cases. There are also situations where ingress or an API gateway is all you need. But if you've got multiple apps, multiple clusters or multiple app teams then Kubernetes is a great fit, and so is a service mesh, especially as you start to run things at greater scale.

In scenarios where you need a service mesh, it makes sense to use the service mesh that gives you a full suite of features. A nice thing about Istio is you can consume it piecemeal - it does not have to be implemented all at once. So you only need mTLS and tracing now? Perfect. You can add mTLS and tracing now and have the option to add metrics, canary, traffic shifting, ingress, RBAC, etc. when you need it.

We’re excited to be on the Istio journey and look forward to continuing to work with the open source community and project to continue advancing service mesh adoption and use cases. If you have any particular question I didn’t cover, feel free to reach out to me at @notthatjenkins. And I'm always happy to chat about the best way to get started on or continue with service mesh implementation. 



Steering The Future Of Istio

I’m honored to have been chosen by the Istio community to serve on the Istio Steering Committee along with Christian Posta, Zack Butcher and Zhonghu Xu. I have been fortunate to contribute to the Istio project for nearly three years and am excited by the huge strides the project has made in solving key challenges that organizations face as they shift to cloud-native architecture. 

Maybe what’s most exciting is the future direction of the project. The core Istio community realizes and advocates that innovation in Open Source doesn't stop with technology - it’s just the starting point. New and innovative ways of growing the community include making contributions easier, Working Group meetings more accessible and community meetings an open platform for end users to give their feedback. As a member of the steering committee, one of my main goals will be to make it easier for a diverse group of people to more easily contribute to the project.

To share my personal journey with Istio: when I started contributing, I found it intimidating to present rough ideas or proposals in an open Networking WG meeting filled with experts and leaders from Google and IBM (even though they were very welcoming). I understand how difficult it can be to get started contributing to a new community, so I want to ensure the Working Group and community meetings are a place for end users and new contributors to share ideas openly, and also to learn from industry experts. I will focus on increasing participation from diverse groups by working to make Istio the most welcoming community possible. In this vein, it will be important for the Steering Committee to further define and enforce a code of conduct that creates a safe place for all contributors.

The Istio community’s effort towards increasing open governance by ensuring no single organization has control over the future of the project has certainly been a step in the right direction with the new makeup of the steering committee. I look forward to continuing work in this area to make Istio the most open project it can be. 

Outside of code contributions, marketing and brand identity are critically important aspects of any open source project. It will be important to encourage contributions from marketing and business leaders to ensure we recognize non-technical contributions. Addressing this is less straightforward than encouraging and crediting code commits, but a diverse, vendor-neutral marketing team in Open Source can create powerful ways to reach users and drive adoption, which is critical to the success of any open source project. Recent user empathy sessions and user survey forms are a great starting point, but our ability to put these learnings into action and adapt as a community will be a key driver in growing project participation.

Last, but definitely not least, I’m keen to leverage my experience and feedback from years of work with Aspen Mesh customers and broad enterprise experience to make Istio a more robust and production-ready project. 

In this vein, my fellow Aspen Mesher Jacob Delgado has worked tirelessly for many months contributing to Istio. As a result of his contributions, he has been named a co-lead for the Istio Product Security Working Group. Jacob has been instrumental in championing security best practices for the project and has also helped responsibly remediate several CVEs this year. I’m excited to see more contributors like Jacob make significant improvements to the project.

I'm humbled by the support of the community members who voted in the steering elections and chose such a talented team to shepherd Istio forward. I look forward to working with all the existing, and hopefully many new, members of the Istio community! You can always reach out to me through email, Twitter or Istio Slack for any community, technical or governance matter, or if you just want to chat about a great idea you have.


Helping Istio Sail

Around three years ago, we recognized the power of service mesh as a technology pattern to help organizations manage the next generation of software delivery challenges, and that led to us founding Aspen Mesh. And we know that efficiently managing high-scale Kubernetes applications requires the power of Istio. Having been a part of the Istio project since 0.2, we have seen countless users and customers benefit from the observability, security and traffic management capabilities that a service mesh like Istio provides. 

Our engineering team has worked closely with the Istio community over the past three years to help drive stability and add new features that make Istio increasingly easy for users to adopt. We believe in the value that open source software (and an open community) brings to the enterprise service mesh space, and we are driven to help lead that effort by setting sound technical direction. By contributing expertise, code and design ideas like Virtual Service delegation and enforcing RBAC policies for Istio resources, we have focused much of our work on making Istio more enterprise-ready. One of these contributions is Istio Vet, which was created and open sourced in the early days of Aspen Mesh as a way to enhance Istio's user experience with multi-resource configuration validation. Istio Vet proved to be very valuable to users, so we decided to work closely with the Istio community to create istioctl analyze in order to add key configuration analysis capabilities to Istio. It’s very exciting to see that many of these ideas have now been implemented in the community and are part of the available feature set for broader consumption.

As the Istio project and its users mature, there is a greater need for open communication around product security and CVEs. Recognizing this, we were fortunate to be able to help create the early disclosure process and lead the product security working group which ensures the overall project is secure and not vulnerable to known exploits. 

We believe that an open source community thrives when users and vendors are considered as equal partners. We feel privileged that we have been able to provide feedback from our customers that has helped accelerate Istio’s evolution. In addition, it has been a privilege to share the networking expertise from our F5 heritage with the Istio project as maintainers and leads of important functional areas and key working groups.

The technical leadership of Istio is a meritocracy, and we are honored that our sustained efforts have been recognized with my appointment to the Technical Oversight Committee.

As a TOC member, I am excited to work with other Istio community leaders to focus the roadmap on solving customer problems while keeping our user experience top of mind. The next, most critical challenges to solve are day two problems and beyond where we need to ensure smoother  upgrades, enhanced security and scalability of the system. We envision use cases emerging from industries like Telco and FinServ which will  push the envelope of technical capabilities beyond what many can imagine.

It has been amazing to see the user growth and the maturity of the Istio project over the last three years. We firmly believe that a more diverse leadership and an open governance in Istio will further help to advance the project and increase participation from developers across the globe.

The fun is just getting started and I am honored to be an integral part of Istio’s journey! 



When Do You Need A Service Mesh?

One of the questions I often hear is: "Do I really need a service mesh?" The honest answer is "It depends." Like nearly everything in the technology space (or more broadly "nearly everything"), this depends on the benefits and costs. But after having helped users progress from exploration to production deployments in many different scenarios, I'm here to share my perspective on which inputs to include in your decision-making process.

A service mesh provides a consistent way to connect, secure and observe microservices. Most service meshes are tightly integrated with an orchestration platform, commonly Kubernetes. There's no way around it; a service mesh is another thing, and at least part of your team will have to learn it. That's a cost, and you should compare that cost to the benefits of operational simplification you may achieve.

But apart from costs and benefits, what should you be asking in order to determine if you really need a service mesh? The number of microservices you’re running, as well as urgency and timing, can have an impact on your needs.

How Many Microservices?

If you're deploying your first or second microservice, I think it is just fine to not have a service mesh. You should, instead, focus on learning Kubernetes and factoring stateless containers out of your applications first. You will naturally build familiarity with the problems that a service mesh can solve, and that will make you much better prepared to plan your service mesh journey when the time comes.

If you have an existing application architecture that provides the observability, security and resilience that you need, then you are already in a good place. For you, the question becomes when to add a service mesh. We usually see organizations notice the toil associated with utility code to integrate each new microservice. Once that toil gets painful enough, they evaluate how they could make that integration more efficient. We advocate using a service mesh to reduce this toil.

The exact point at which service mesh benefits clearly outweigh costs varies from organization to organization. In my experience, teams often realize they need a consistent approach once they have five or six microservices. However, many users push to a dozen or more microservices before they notice the increasing cost of utility code and the increasing complexity of slight differences across their applications. And, of course, some organizations continue scaling and never choose a service mesh at all, investing in application libraries and tooling instead. On the other hand, we also work with early birds that want to get ahead of the rising complexity wave and introduce service mesh before they've got half-a-dozen microservices. But the number of microservices you have isn’t the only part to consider. You’ll also want to consider urgency and timing. 

Urgency and Timing

Another part of the answer to “When do I need a service mesh?” includes your timing. The urgency of considering a service mesh depends on your organization’s challenges and goals, but can also be considered by your current process or state of operations. Here are some states that may reduce or eliminate your urgency to use a service mesh:

  1. Your microservices are all written in one language ("monoglot") by developers in your organization, building from a common framework.
  2. Your organization dedicates engineers to building and maintaining org-specific tooling and instrumentation that's automatically built into every new microservice.
  3. You have a partially or totally monolithic architecture where application logic is built into one or two containers instead of several.
  4. You release or upgrade all-at-once after a manual integration process.
  5. You use application protocols that are not served by existing service meshes (so usually not HTTP, HTTP/2, gRPC).

On the other hand, here are some signals that you will need a service mesh and may want to start evaluating or adopting early:

  1. You have microservices written in many different languages that may not follow a common architectural pattern or framework (or you're in the middle of a language/framework migration).
  2. You're integrating third-party code or interoperating with teams that are a bit more distant (for example, across a partnership or M&A boundary) and you want a common foundation to build on.
  3. Your organization keeps "re-solving" problems, especially in the utility code (my favorite example: certificate rotation, while important, is no scrum team's favorite story in the backlog).
  4. You have robust security, compliance or auditability requirements that span services.
  5. Your teams spend more time localizing or understanding a problem than fixing it.

I consider this last point the three-alarm fire signaling that you need a service mesh, and it's a good way to return to the quest for simplification. When an application is failing to deliver a quality experience to its users, how does your team resolve it? We work with organizations that report that finding the problem is often the hardest and most expensive part.

What Next?

Once you've localized the problem, can you alleviate or resolve it? It's a painful situation if the only fix is to develop new code or rebuild containers under pressure. That's where you see the benefit from keeping resiliency capabilities independent of the business logic (like in a service mesh).

If this story is familiar to you, you may need a service mesh right now. If you're getting by with your existing approach, that’s great. Just keep in mind the costs and benefits of what you’re working with, and keep asking:

  1. Is what you have right now really enough, or are you spending too much time trying to find problems instead of developing and providing value for your customers?
  2. Are your operations working well with the number of microservices you have, or is it time to simplify?
  3. Do you have critical problems that a service mesh would address?

Keeping tabs on the answers to these questions will help you determine if — and when — you really need a service mesh.

In the meantime if you're interested in learning more about service mesh, check out The Complete Guide to Service Mesh.



Aspen Mesh 1.4.4 & 1.3.8 Security Update

Aspen Mesh is announcing the release of 1.4.4 and 1.3.8 (based on upstream Istio 1.4.4 & 1.3.8), both addressing a critical security vulnerability. All our customers are strongly encouraged to upgrade to these versions immediately, choosing the one that matches your currently deployed Aspen Mesh version.

The Aspen Mesh team reported this CVE to the Istio community as per Istio CVE reporting guidelines, and our team was able to further contribute to the community--and for all Istio users--by fixing this issue in upstream Istio. As this CVE is extremely easy to exploit and the risk score was deemed very high (9.0), we wanted to respond  with urgency by getting a patch in upstream Istio and out to our customers as quickly as possible. This is just one of the many ways we are able to provide value to our customers as a trusted partner by ensuring that the pieces from Istio (and Aspen Mesh) are secure and stable.

Below are details about CVE-2020-8595, steps to verify whether you’re currently vulnerable, and how to upgrade to the patched releases.

CVE Description

A bug in Istio's Authentication Policy exact-path matching logic allows unauthorized access to resources without a valid JWT token. This bug affects all versions of Istio (and Aspen Mesh) that support JWT Authentication Policy with path-based trigger rules (all 1.3 & 1.4 releases). The logic for the exact path match in the Istio JWT filter includes query strings or fragments instead of stripping them off before matching. This means attackers can bypass the JWT validation by appending “?” or “#” characters after the protected paths.

Example Vulnerable Istio Configuration

A JWT Authentication Policy can be configured to trigger on an exact HTTP path match such as "/productpage", as shown in the example below. There, the Authentication Policy is applied at the ingress gateway service so that any request with the exact path “/productpage” requires a valid JWT token. In the absence of a valid JWT token, the request is denied and never forwarded to the productpage service. However, due to this CVE, any request made to the ingress gateway with the path "/productpage?" and no valid JWT token is not denied but sent along to the productpage service, thereby allowing access to a protected resource without a valid JWT token.
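The original post showed this configuration inline; here is a representative policy using the Istio 1.3/1.4-era authentication API (the issuer and JWKS URI are placeholders):

    # JWT authentication applied at the ingress gateway.
    # Requests whose path exactly matches /productpage must carry a valid JWT.
    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: ingress-jwt
      namespace: istio-system
    spec:
      targets:
      - name: istio-ingressgateway
      origins:
      - jwt:
          issuer: "https://auth.example.com"        # placeholder
          jwksUri: "https://auth.example.com/jwks"  # placeholder
          triggerRules:
          - includedPaths:
            - exact: /productpage
      principalBinding: USE_ORIGIN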

Validation

Since this vulnerability is in the Envoy filter added by Istio, you can also check the proxy image you’re using in the cluster locally to see if you’re currently vulnerable. Download and run this script to verify whether the proxy image you’re using is vulnerable. Thanks to Francois Pesce from the Google Istio team for helping us create this test script.

Upgrading to This Version

Please follow the upgrade instructions in our documentation to upgrade to these versions. Since this vulnerability affects the sidecar proxies and gateways (ingress and egress), it is important to follow the post-upgrade tasks here and perform a rolling upgrade of all your sidecar proxies and gateways.



Announcing Aspen Mesh Secure Ingress Policy

In the new Application Economy, developers are building and deploying applications at a far greater frequency than ever before. Organizations gain this agility by shifting to microservices architectures, powered by Kubernetes and continuous integration and delivery (CI/CD). For businesses to derive value from these applications, they need to be exposed to the outside world in a secure way so that their customers can access them--and have a great user experience. That’s such an obvious statement, you’re probably wondering why I even bothered saying it.

Well, within most organizations, securely exposing an application to the outside world is complicated. Ports, protocols, paths, and auth requirements need to be collected. Traffic routing resources and authentication policies need to be configured. DNS entries and TLS certificates need to be created and mounted. Application teams know some of these things and platform owners know others. Pulling all of this together is painful and time consuming.

This problem is exacerbated by the lack of APIs mapping intent to the persona performing the task. Let’s take a quick look at the current landscape.

Kubernetes allows orchestration (deploying and upgrading) of applications, but it doesn't provide any capability to capture application behavior like the protocols, paths and security requirements of their APIs. To securely expose an application to its users, platform operators currently need to capture this information from developers in private conversations and create additional Kubernetes resources like Ingress, which in turn create the plumbing that allows traffic from outside the cluster and routes it to the appropriate backend application. Alternatively, advanced routing capabilities in Istio make it possible to control more aspects of traffic management, while also allowing developers to offload functionality like JWT authentication from their applications to the infrastructure.

But the missing piece in both of these scenarios is a reliable and scalable way of gathering information about the applications from developers, independent of platform operators, and then enabling platform operators to securely configure the resources they need to expose those applications.

Additionally, configuring the Ingress or Istio routing APIs is only part of the puzzle. Operators also need to set up domain names (DNS) and get domain certificates (static, or dynamic via Let’s Encrypt for example) in order to secure the traffic getting into their clusters. All of this requires managing a lot of moving pieces, with the possibility of failures at multiple steps along the way.

(Figure: Aspen Mesh Policy Framework - before)

 

To solve these challenges, we are excited to announce the release of Aspen Mesh Secure Ingress Policy.

A New Way to Securely Expose Your Applications

Our goal for developing this new Secure Ingress Policy framework is to help streamline communication between application developers and platform operators. With this new feature, both personas can be productive on their own while still working together.

The way it works is application developers provide a specification for their service which they can store in their code management systems and communicate to Aspen Mesh through an Application API. This spec includes the service port and protocols to expose via Ingress and the API paths and authentication requirements (e.g., JWT validation).

Platform operators provide a specification that defines the security and networking aspects of the platform and communicates it to Aspen Mesh via a Secure Ingress API. This spec includes certificate secrets, domain name, and JWKS server and issuer.
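To make the division of labor concrete, here’s a purely hypothetical sketch of the two specs; the field names are illustrative only and are not the actual Aspen Mesh API:

    # Hypothetical sketch -- not the real Aspen Mesh schema.
    # Application spec: owned by the app developer, stored next to the code.
    kind: Application
    metadata:
      name: bookstore
    spec:
      service: bookstore
      port: 8080
      protocol: http
      paths:
      - prefix: /api/orders
        jwtValidation: required
    ---
    # SecureIngress spec: owned by the platform operator.
    kind: SecureIngress
    metadata:
      name: bookstore-ingress
    spec:
      domain: bookstore.example.com
      certificateSecret: bookstore-tls   # or issued dynamically via Let's Encrypt
      jwks:
        issuer: https://auth.example.com
        server: https://auth.example.com/jwks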

(Figure: Aspen Mesh Policy Framework - after)

 

Aspen Mesh takes these inputs and creates all of the necessary system resources like Istio Gateways, VirtualServices, Authentication Policies, configuring DNS entries and retrieving certificates to enable secure access to the application. If you have configured these by hand, you know the complexity involved in getting this right. With this new feature, we want our customers to focus on what’s important to them and let Aspen Mesh take care of your infrastructure needs. Additionally, the Aspen Mesh controllers always keep the resources in sync and augment them as the Secure Ingress and Application resources are updated.

Another important benefit of mapping APIs to personas is the ability to create ownership and store configuration in the right place. Keep the application-centric specifications in code right next to the application so that you can review and update both as part of your normal code review process. You don’t need another process or workflow to apply configuration changes out-of-band with your code changes. And because these things live together, they can naturally be deployed at the same time, reducing misconfigurations.

The overarching goals for our APIs were to enable platform operators to retain the strategic point of control to enforce policies while allowing application developers to move quickly and deliver customer-facing features. And most importantly, they allow our customers’ customers to use the application reliably and securely.

Today, we’re natively integrated with AWS Route 53 and expect to offer integrations with Azure and GCP in the near future. We also retrieve and renew domain certificates from Let’s Encrypt; the only thing operators need to provide is their registered email address, and the rest is handled by the Aspen Mesh control plane.

Interested in learning more about Secure Ingress Policy? Reach out to one of our experts to learn more about how you can implement Aspen Mesh’s Secure Ingress Policies at your organization.



Protocol Sniffing in Production

Istio 1.3 introduced a new capability to automatically sniff the protocol used when two containers communicate. This is a powerful benefit that makes it easy to get started with Istio, but it has some tradeoffs. Aspen Mesh recommends that production deployments of Aspen Mesh (built on Istio) do not use protocol sniffing, and Aspen Mesh 1.3.3-am2 turns off protocol sniffing by default. This blog explains the tradeoffs and why we think turning off protocol sniffing is the better choice.

What Protocol Sniffing Is

Protocol sniffing predates Istio. For our purposes, we're going to define it as examining some communication stream and classifying it as implementing one protocol (like HTTP) or another (like SSH), without additional information. For example, here are two streams from client to server; if you've ever debugged these protocols, you won't have a hard time telling them apart:

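The original figure is lost; reconstructed, the opening bytes of two such streams might look like this (the first is plainly HTTP, the second is the banner an SSH client sends):

    Stream 1 (HTTP):
    GET /index.html HTTP/1.1
    Host: example.com

    Stream 2 (SSH):
    SSH-2.0-OpenSSH_8.0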

In an active service mesh, the Envoy sidecars will be handling thousands of these streams a second.  The sidecar is a proxy, so it reads every byte in the stream from one side, examines it, applies policy to it and then sends it on.  In order to apply proper policy ("Send all PUTs to /create_* to create-handler.foo.svc.cluster.local"), Envoy needs to understand the bytes it is reading.  Without protocol sniffing, that's done by configuring Envoy:

  • Layer 7 (HTTP): "All streams with a destination port of 8000 are going to follow the HTTP protocol"
  • Layer 4 (SSH): "All streams with a destination port of 22 are going to follow the SSH protocol"

When Envoy sees a stream with destination port 8000, it reads each byte and runs its own HTTP protocol implementation to understand those bytes and then apply policy.  Port 22 has SSH traffic; Envoy doesn't have an SSH protocol implementation so Envoy treats it as opaque TCP traffic. In proxies this is often called "Layer 4 mode" or "TCP mode"; this is when the proxy doesn't understand the higher-level protocol inside, so it can only apply a simpler subset of policy or collect a subset of telemetry.

For instance, Envoy can tell you how many bytes went over the SSH stream, but it can't tell you anything about whether those bytes indicated a successful SSH session or not.  But since Envoy can understand HTTP, it can say "90% of HTTP requests are successful and get a 200 OK response".

Here's an analogy - I speak English but not Italian; however, I can read and write the Latin alphabet that covers both.  So I could copy an Italian message from one piece of paper to another without understanding what's inside. Suppose I was your proxy and you said, "Andrew, copy all mail correspondence into email for me" - I could do that whether you received letters from English-speaking friends or Italian-speaking ones.  Now suppose you say, "Copy all mail correspondence into email unless it has Game of Thrones spoilers in it."  I can detect spoilers in English correspondence because I actually understand what's being said, but not Italian, where I can only copy the letters from one page to the other.

If I were a proxy, I'm a layer 7 English proxy but I only support Italian in layer 4 mode.

In Aspen Mesh and Istio, the protocol for a stream is configured in the Kubernetes service.  These are the options:

  • Specify a layer 7 protocol: Start the name of the service with a layer 7 protocol that Istio and Envoy understand, for example "http-foo" or "grpc-bar".
  • Specify a layer 4 protocol: Start the name of the service with a layer 4 protocol, for example "tcp-foo".  (You also use this if you know the layer 7 protocol but it's not one that Istio and Envoy support; for example, you might name a port "tcp-ssh")
  • Don't specify protocol at all: Name it without a protocol prefix, e.g. "clients".
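For example, a Kubernetes Service mixing all three styles might look like this (a sketch; names and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: foo
    spec:
      selector:
        app: foo
      ports:
      - name: http-api   # layer 7: treat as HTTP
        port: 8000
      - name: tcp-ssh    # layer 4: known protocol, but not one Envoy parses
        port: 22
      - name: clients    # no prefix: Istio must choose (layer 4, or sniff)
        port: 9000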

If you don't specify a protocol at all, then Istio has to make a choice.  Before protocol sniffing was a feature, Istio chose to treat this with layer 4 mode.  Protocol sniffing is a new behavior that says, "try reading some of it - if it looks like a protocol you know, treat it like that protocol".

An important note here is that this sniffing applies for both passive monitoring and active management.  Istio both collects metrics and applies routing and policy. This is important because if a passive system has a sniff failure, it results only in a degradation of monitoring - details for a request may be unavailable.  But if an active system has a sniff failure, it may misapply routing or policy; it could send a request to the wrong service.

Benefits of Protocol Sniffing

The biggest benefit of protocol sniffing is that you don't have to specify the protocols.  Any communication stream can be sniffed without human intervention. If it happens to be HTTP, you can get detailed metrics on it.

That removes a significant amount of configuration burden and reduces time-to-value for your first service mesh install.  Drop it in and instantly get HTTP metrics.

Protocol Sniffing Failure Modes

However, as with most things, there is a tradeoff.  In some cases, protocol sniffing can produce results that might surprise you.  This happens when the sniffer classifies a stream differently than you or some other system would.

False Positive Match

This occurs when a protocol happens to look like HTTP, but the administrator doesn't want it to be treated by the proxy as HTTP.

One way this can happen is if the apps are speaking some custom protocol where the beginning of the communication stream looks like HTTP, but it later diverges.  Once it diverges and is no longer conforming to HTTP, the proxy has already begun treating it as HTTP and now must terminate the connection. This is one of the differences between passive sniffers and active sniffers - a passive sniffer could simply "cancel" sniffing.

Behavior Change:

  • Without sniffing: Stream is considered Layer 4 and completes fine.
  • With sniffing: Stream is considered Layer 7, and then when it later diverges, the proxy closes the stream.

False Negative Match

This occurs when the client and server think they are speaking HTTP, but the sniffer decides it isn't HTTP.  In our case, that means the sniffer downgrades to Layer 4 mode. The proxy no longer applies Layer 7 policy (like Istio's HTTP Authorization) or collects Layer 7 telemetry (like request success/failure counts).

One case where this occurs is when the client and server are both technically violating a specification but in a way that they both understand.  A classic example in the HTTP space is line termination - technically, lines in HTTP must be terminated with a CRLF; two characters 0x0d 0x0a.  But most proxies and web servers will also accept HTTP where lines are only terminated with LF (just the 0x0a character), because some ancient clients and hacked-together UNIX tools just sent LFs.

That example is usually harmless but a riskier one is if a client can speak something that looks like HTTP, that the server will treat as HTTP, but the sniffer will downgrade.  This allows the client to bypass any Layer 7 policies the proxy would enforce. Istio currently applies sniffing to outbound traffic where the outbound target is unknown (often occurs for Egress traffic) or the outbound target is a service port without a protocol annotation.

Here's an example: I know of two non-standard behaviors that node.js' HTTP server framework allows.  The first is allowing extra spaces between the Request-URI and the HTTP-Version in the Request-Line. The second is allowing spaces in a Header field-name.  Here's an example with the weird parts highlighted:
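(The original post highlighted the weird parts in an image; in this reconstruction, the markers call them out.)

    GET /foo  HTTP/1.1        <-- extra spaces between the Request-URI and HTTP-Version
    Host: example.com
    x-foo   bar: baz          <-- spaces inside the Header field-name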

If I send this to a node.js server, it accepts it as a valid HTTP request (for the curious, the extra whitespace in the request line is dropped, and the whitespace in the Header field-name is included so the header is named "x-foo   bar"). Node.js' HTTP parser is taken from nginx which also accepts the extra spaces. Nginx is pretty darn popular so other web frameworks and a lot of servers accept this. Interestingly, so does the HTTP parser in Envoy (but not the HTTP inspector).

Suppose I have a situation like this:  We just added a new capability to delete in-progress orders to the beta version of our service, so we want all DELETE requests to be routed to "foo-beta" and all other normal requests routed to "foo".  We might write an Istio VirtualService to route DELETE requests like this:
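The original VirtualService was shown as an image; a minimal reconstruction, with destination names taken from the prose, might look like this:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: foo
    spec:
      hosts:
      - foo
      http:
      - match:
        - method:
            exact: DELETE
        route:
        - destination:
            host: foo-beta   # DELETEs go to the beta version
      - route:
        - destination:
            host: foo        # everything else goes to the stable version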

If I send a request like this, it is properly routed to foo-beta.
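Reconstructed (the path is illustrative), a well-formed request looks like:

    DELETE /orders/123 HTTP/1.1
    Host: foo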

But if I send one like this, I bypass the route and go to foo. Oops!
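Reconstructed, the bypass relies on the kind of non-standard spacing shown earlier; the extra space keeps the sniffer from classifying the stream as HTTP, so it falls back to Layer 4 and the DELETE route never applies:

    DELETE /orders/123  HTTP/1.1      <-- extra space: sniffer downgrades to Layer 4
    Host: foo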

This means that clients can choose to "step around" routing if they can find requests that trick the sniffer. If those requests aren't accepted by the server at the other end, it should be OK.  However, if they are accepted by the server, bad things can happen. Additionally, you won't be able to audit or detect this case because you won't have Jaeger traces or access logs from the proxy since it thought the request wasn't HTTP.

(We investigated this particular case and ran our results past the Envoy and Istio security vulnerability teams before publishing this blog. While it didn't rise to the level of security issue, we want it to be obvious to our users what the tradeoffs are. While the benefits of protocol sniffing may be worthwhile in many cases, most users will want to avoid protocol sniffing in security-sensitive applications.)

Behavior Change:

  • Without sniffing: Stream is Layer 7 and invalid requests are consistently rejected.
  • With sniffing: Some streams may be classified as Layer 4 and bypass Layer 7 routing or policy.

Recommendation

Protocol sniffing lessens the configuration burden to get started with Istio, but creates uncertainty about behaviors.  Because this uncertainty can be controlled by the client, it can be surprising or potentially hazardous. In production, I'd prefer to tell the proxy everything I know and have the proxy reject everything that doesn't look as expected.  Personally, I like my test environments to look like my prod environments ("Test Like You Fly") so I'm going to also avoid sniffing in test.

I would use protocol sniffing when I first dropped a service mesh into an evaluation scenario, when I'm at the stage of, "Let's kick the tires and see what this thing can tell me about my environment."

For this reason, Aspen Mesh recommends users don't rely on protocol sniffing in production.  All service ports should be declared with a name that specifies the protocol (things like "http-app" or "tcp-custom").  Our users will continue to receive "vet" warnings for service ports that don't comply, so they can be confident that their clusters will behave predictably.



Announcing Aspen Mesh 1.3

We’re excited to announce the release of Aspen Mesh 1.3, which is based on Istio’s latest LTS release 1.3 (specific tag version 1.3.3). This release builds on our self-managed release (1.2 series) and includes all the new capabilities added by the Istio community in release 1.3, plus a host of new Aspen Mesh features, all fully tested and backed by production-grade support ready for enterprise adoption.

The theme for the Aspen Mesh and Istio 1.3 releases was enhanced user experience. The release includes an enhanced user dashboard that has been redesigned for easier navigation of the service graph and cluster resources. The Aspen Mesh service graph view has been augmented to include ingress and egress services, as well as easier access to health and policy details for nodes on the graph. While a service graph is a great tool for visualizing service communication as a team, we realized that in order to quickly identify services that are experiencing problems, individual platform engineers need a view that allows them to dig deeper and gain additional insight into their services. To address this, we are releasing a new table view which provides access to additional information about clusters, namespaces and workloads, including the ingress and egress services they are communicating with and any warnings or errors for those objects as detected by our open source configuration analyzer Istio Vet.


The Istio community added new capabilities that make it easier for users to adopt and debug Istio, and reduced the configuration needed to get a service mesh working in a Kubernetes environment. The full list of features and enhancements can be found in Istio’s release announcement, but there are a few features that deserve deeper analysis.

Specifying Container Ports Is No Longer Required

Before release 1.3, Istio only intercepted inbound traffic on ports that were explicitly declared as part of the container spec in Kubernetes. This was often a cause of friction for adoption, as Kubernetes doesn’t require container ports to be specified and by default forwards traffic to any unlisted port. Making this even worse, any unlisted inbound port bypassed the sidecar proxy (instead of being blocked), which created a potential security risk, as bypassing the proxy meant no policies were being enforced. In this release, specifying container ports is no longer required, and by default all ports are intercepted for traffic and redirected to the sidecar proxy, which means misconfiguration will no longer lead to security violations! If for some reason you would still like to explicitly specify inbound ports instead of capturing all ports (capturing all is what we highly recommend), you can use the annotation “traffic.sidecar.istio.io/includeInboundPorts” on the pod spec.
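For example, the annotation goes on the pod spec and lists the ports to intercept (the port values here are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: foo
      annotations:
        traffic.sidecar.istio.io/includeInboundPorts: "8080,9090"
    spec:
      containers:
      - name: app
        image: example/app:latest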

Protocol Detection

In earlier versions of Istio, all service port names were required to be explicitly prefixed with the protocol (http-, grpc-, tcp-, etc.) to declare the protocol being used by the service port. In the absence of a prefix, traffic was classified as TCP, which meant a loss in visibility (metrics/tracing). It was also possible to bypass policy if a user had configured HTTP or Layer 7 policies thinking that the application was accepting Layer 7, but the mesh was classifying it as TCP traffic. Experienced users of Kubernetes who already had a lot of existing configuration had to migrate their service definitions to add this prefix, which led to a lot of missing configurations and adoption burden. In release 1.3, an experimental protocol detection feature was added which doesn’t require users to prefix the service port name for HTTP traffic. Note that this feature is experimental and only works for HTTP traffic - for all other protocols you still need to add the prefix on the port names. Protocol detection is useful functionality which can reduce configuration burden for users, but it can interact with policies and routing in unexpected ways. We are working with the Istio community to iron out these interactions and will be publishing a blog soon on recommended best practices for production usage. In the meantime, this feature is disabled by default in the Aspen Mesh release and we encourage our customers to enable this only in staging environments. Additionally, for Aspen Mesh customers, we automatically run the service port prefix vetter and notify you if any service in the mesh has ports with missing protocol prefixes.

Mixer-less Telemetry

Earlier versions of Istio had a control plane component, Mixer, which was responsible for receiving attributes about traffic from the sidecar proxies in client and server workloads and exposing them to a telemetry backend system like Prometheus or DataDog. This architecture provided a nice abstraction layer for operators to switch out telemetry backend systems, but Mixer often became a choke point that required a large amount of resources (CPU/memory), which made it expensive for operators to manage Istio. In this release, an experimental feature was added which doesn’t require running Mixer to capture telemetry. In this mode, the sidecar proxies expose the metrics directly, which can be scraped by Prometheus. This feature is disabled by default and is under active development to make sure users get the same metrics with and without Mixer. This page documents how to enable and use this feature if you’re interested in trying it out.
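As a sketch of what direct scraping can look like (adapted from the pattern Istio’s bundled Prometheus configuration uses; sidecars expose merged metrics at :15090/stats/prometheus):

    scrape_configs:
    - job_name: envoy-stats
      metrics_path: /stats/prometheus
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      # Keep only pods that expose the Envoy Prometheus metrics port.
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        action: keep
        regex: '.*-envoy-prom'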

Telemetry for External Services

Depending on your global settings (i.e., whether to allow any external service access or to block all traffic without explicit ServiceEntries), there were gaps in telemetry when external traffic was either blocked or allowed. Having visibility into external services is one of the key benefits of a service mesh, and the new functionality added in release 1.3 allows you to monitor all external service traffic in either of the modes. It was a highly requested feature from both our customers and other production users of Istio, and we were pleased to contribute this functionality to open source Istio. This blog documents how the augmented metrics can be used to better understand external service access. Note that all Aspen Mesh releases by default block all external service access (which we recommend) unless it is explicitly declared via ServiceEntries.
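For instance, in a block-by-default mesh, external access (and the telemetry for it) is declared explicitly; a minimal ServiceEntry for an outside HTTPS API might look like this (the host is illustrative):

    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: external-api
    spec:
      hosts:
      - api.example.com
      location: MESH_EXTERNAL
      ports:
      - number: 443
        name: tls
        protocol: TLS
      resolution: DNS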

We hope that these new features simplify configuration needed to adopt Aspen Mesh and the enhanced User Experience makes it easy for you to navigate the complexities of a microservices environment. You can get the latest release here or if you’re an existing customer please follow the upgrade instructions in our documentation to switch to this version.