Why Is Policy Hard?

Aspen Mesh spends a lot of time talking to users about policy, even if we don’t always start out calling it that. A common pattern we see with clients is:

  1. Concept: "Maybe I want this service mesh thing"
  2. Install: "Ok, I've got Aspen Mesh installed, now what?"
  3. Observe: "Ahhh! Now I see how my microservices are communicating.  Hmmm, what's that? That pod shouldn't be talking to that database!"
  4. Act: "Hey mesh, make sure that pod never talks to that database"

The Act phase is interesting, and there’s more to it than might be obvious at first glance. In this blog, I propose we work through some thought experiments to explore how a service mesh can help you act on insights from the mesh.

First, put yourself in the shoes of the developer that just found out their test pod is accidentally talking to the staging database. (Ok, you're working from home today so you don't have to put on shoes; the cat likes sleeping on your shoeless feet better anyways.) You want to control the behavior of a narrow set of software for which you're the expert; you have local scope and focus.

Next, put on the shoes of a person responsible for operating many applications; people we talk to often have titles that include Platform, SRE, Ops or Infra. Each day they’re diving into different applications, so being able to rapidly understand an application is key. A consistent way of mapping across applications, datacenters, clouds, etc. is critical. Your goal is to reduce "snowflake architecture" in favor of familiarity, making it easier when you do have to context switch.

Now let's change into the shoes of your org's Compliance Officer. You’re on the line for documenting and proving that your platform is continually meeting compliance standards. You don't want to be the head of the “Department of No”, but what’s most important to you is staying out of the headlines. A great day at work for you is when you've got clarity on what's going on across lots of apps, databases, external partners, every source of data your org touches AND you can make educated tradeoffs to help the business move fast with the right risk profile. You know it’s ridiculous to be involved in every app change, so you need separation-of-concerns.

I'd argue that all of these people have policy concerns. They want to be able to specify their goals at a suitably high level and leave the rote and repetitive portions to an automated system.  The challenging part is that there's only one underlying system ("the Kubernetes cluster") that has to respond to each of these disparate personas.

So, to me policy is about transforming a bunch of high-level behavioral prescriptions into much lower-level versions through progressive stages. Useful real-world policy systems do this in a way that is transparent and understandable to all users, and minimizes the time humans spend coordinating. Here's an example "day-in-the-life" of a policy:

At the top is the highest level goal: "Devs should test new code without fear". Computers are hopeless at implementing this directly. At the bottom is a rule suitable for a computer, like a firewall, to implement.

The layers in the middle are where a bad policy framework can really hurt. Some personas (the hypothetical Devs) want to instantly jump to the bottom; they're the "4.3.2.1" in the above example. Other personas (like the hypothetical Compliance Officer) sit way up top, going down a few layers but not getting to the bottom on a day-to-day basis.

I think the best policy frameworks help each persona:

  • Quickly find the details for the layer they care about right now.
  • Understand where a policy came from (connect to higher layers).
  • Understand whether it's doing what they want (trace to lower layers).
  • Know where to go to change it (edit/create policy).

As an example, let's look at iptables, one of the firewalling/packet mangling frameworks for Linux.  This is at that bottom layer in my example stack - very low-level packet processing that I might look at if I'm an app developer and my app's traffic isn't doing what I'd expect.  Here's an example dump:


root@kafka-0:/# iptables -n -L -v --line-numbers -t nat
Chain PREROUTING (policy ACCEPT 594K packets, 36M bytes)
num   pkts bytes target     prot opt in out   source destination
1     594K 36M ISTIO_INBOUND  tcp -- * * 0.0.0.0/0            0.0.0.0/0

Chain INPUT (policy ACCEPT 594K packets, 36M bytes)
num   pkts bytes target     prot opt in out   source destination

Chain OUTPUT (policy ACCEPT 125K packets, 7724K bytes)
num   pkts bytes target     prot opt in out   source destination
1      12M 715M ISTIO_OUTPUT  tcp -- * * 0.0.0.0/0            0.0.0.0/0

Chain POSTROUTING (policy ACCEPT 12M packets, 715M bytes)
num   pkts bytes target     prot opt in out   source destination

Chain ISTIO_INBOUND (1 references)
num   pkts bytes target     prot opt in out   source destination
1        0 0 RETURN     tcp -- * *     0.0.0.0/0 0.0.0.0/0            tcp dpt:22
2     594K 36M RETURN     tcp -- * *   0.0.0.0/0 0.0.0.0/0            tcp dpt:15020
3        2 120 ISTIO_IN_REDIRECT  tcp -- * * 0.0.0.0/0            0.0.0.0/0

Chain ISTIO_IN_REDIRECT (1 references)
num   pkts bytes target     prot opt in out   source destination
1        2 120 REDIRECT   tcp -- * *     0.0.0.0/0 0.0.0.0/0            redir ports 15006

Chain ISTIO_OUTPUT (1 references)
num   pkts bytes target     prot opt in out   source destination
1      12M 708M ISTIO_REDIRECT  all -- * lo 0.0.0.0/0           !127.0.0.1
2        7 420 RETURN     all -- * *     0.0.0.0/0 0.0.0.0/0            owner UID match 1337
3        0 0 RETURN     all -- * *     0.0.0.0/0 0.0.0.0/0            owner GID match 1337
4     119K 7122K RETURN     all -- * *   0.0.0.0/0 127.0.0.1
5        4 240 ISTIO_REDIRECT  all -- * * 0.0.0.0/0            0.0.0.0/0

Chain ISTIO_REDIRECT (2 references)
num   pkts bytes target     prot opt in out   source destination
1      12M 708M REDIRECT   tcp -- * *   0.0.0.0/0 0.0.0.0/0            redir ports 15001


This allows me to quickly understand a lot of details about what is happening at this layer. Each rule specification is on the right-hand side and is relatively intelligible to the personas that operate at this layer. On the left, I get "pkts" and "bytes" - this is a count of how many packets have triggered each rule, helping me answer "Is this doing what I want it to?". There's even more information here if I'm really struggling: I can log the individual packets that are triggering a rule, or mark them in a way that I can capture them with tcpdump.  
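For instance, if I suspected traffic wasn't hitting the rule I expected, a quick and purely illustrative debugging sketch might look like the following; the chain name comes from the dump above, but the log prefix is something I made up:

root@kafka-0:/# iptables -t nat -I ISTIO_OUTPUT 1 -j LOG --log-prefix "istio-out: "   # log every packet traversing this chain
root@kafka-0:/# dmesg | grep "istio-out:"                                             # read the logged packets back from the kernel log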

Finally, furthest on the left in the "num" column is a line number, which is necessary if I want to modify or delete rules or add new ones before/after; this is a little bit of help for "Where do I go to change this?". I say a little bit because in most systems that I'm familiar with, including the one I grabbed that dump from, iptables rules are produced by some program or higher-level system; they're not written by a human. So if I just added a rule, it would only apply until that higher-level system intervened and changed the rules (in my case, until a new Pod was created, which can happen at any time). I need help navigating up a few layers to find the right place to effect the change.

iptables lets you organize groups of rules into your own chains; in this case, the chain names (ISTIO_*) are a hint that Istio produced these rules, so I know which higher layer to examine next.

For a much different example, how about the Kubernetes CI Robot (from Prow)? If you've ever made a PR to Kubernetes or many other CNCF projects, you likely interacted with this robot. It's an implementer of policy; in this case, the policies around changing source code for Kubernetes.  One of the policies it manages is compliance with the Contributor License Agreement (CLA); contributors agree to grant some intellectual property rights surrounding their contributions. If k8s-ci-robot can't confirm that everything is alright, it will add a comment to your PR with links to help you resolve the issue.

This is very different from firewall policy, but I'd say it's still policy and the same principles apply. Let's explore. If you had to diagram the policy around this, it would start at the top with the legal principle that Kubernetes wants to make sure all the software under its umbrella has free and clear IP terms. Stepping down a layer, the Kubernetes project decided to satisfy that requirement by requiring a CLA for any contributions. And so on, until we get to the bottom layer: the code that implements the CLA check.

As an aside, the code that implements the CLA check is actually split into two halves: first there's a CI job that actually checks the commits in the PR against a database of signed CLAs, and then there's code that takes the result of that job and posts helpful information for users to resolve any issues. That's not visible or important at that top layer of abstraction (the CNCF lawyers shouldn't care).

This policy structure is easy to navigate. If your CLA check fails, the comment from the robot has great links. If you're an individual contributor you can likely skip up a layer, sign the CLA and move on. If you're contributing on behalf of a company, the links will take you to the document you need to send to your company's lawyers so they can sign on behalf of the company.

So those are two examples of policy. You probably encounter many other ones every day from corporate travel policy to policies (written, unwritten or communicated via email missives) around dirty dishes.

It's easy to focus on the technical capabilities of the lowest levels of your system. But I'd recommend that you don't lose focus on the operability of your system. It’s important that it be transparent and easy to understand. Both the iptables and k8s-ci-robot examples are transparent. The k8s-ci-robot has an additional feature: it knows you're probably wondering "Where did this come from?" and it answers that question for you. This helps you and your organization navigate the layers of policy.

When implementing service mesh to add observability, resilience and security to your Kubernetes clusters, it’s important to consider how to set up policy in a way that can be navigated by your entire team. With that end in mind, Aspen Mesh is building a policy framework for Istio that makes it easy to implement policy and understand how it will affect application behavior.

Did you like this blog? Subscribe to get email updates when new Aspen Mesh blogs go live.


Recommended Remediation for Kubernetes CVE-2019-11247

A Kubernetes vulnerability CVE-2019-11247 was announced this week.  While this is a vulnerability in Kubernetes itself, and not Istio, it may affect Aspen Mesh users. This blog will help you understand possible consequences and how to remediate.

  • You should mitigate this vulnerability by updating to Kubernetes 1.13.9, 1.14.5 or 1.15.2 as soon as possible.
  • This vulnerability affects the interaction of Roles (a Kubernetes RBAC resource) and CustomResources (Istio uses CustomResources for things like VirtualServices, Gateways, DestinationRules and more).
  • Only certain definitions of Roles are vulnerable.
  • Aspen Mesh's installation does not define any such Roles, but you might have defined them for other software in your cluster.
  • More explanation and recovery details below.

Explanation

If you have a Role defined anywhere in your cluster with a "*" for resources or apiGroups, then anything that can use that Role could escalate to modify many CustomResources.

This Kubernetes issue and this helpful blog have extensive details and an example walkthrough that we'll summarize here.  Kubernetes Roles define sets of permissions for resources in one particular namespace (like "default").  They are not supposed to define permissions for resources in other namespaces or resources that live globally (outside of any namespace); that's what ClusterRoles are for.

Here's an example Role from Aspen Mesh that says "The IngressGateway should be able to get, watch or list any secrets in the istio-system namespace", which it needs to bootstrap secret discovery to get keys for TLS or mTLS:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: istio-ingressgateway-sds
  namespace: istio-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

This Role does not grant permissions to secrets in any other namespace, or grant permissions to anything aside from secrets.   You'd need additional Roles and RoleBindings to add those. It doesn't grant permissions to get or modify cluster-wide resources.  You'd need ClusterRoles and ClusterRoleBindings to add those.

The vulnerability is that if the Role grants access to CustomResources in one namespace, it accidentally grants access to the same kinds of CustomResources that exist at global scope.  This is not exploitable in many cases because if you have a namespace-scoped CustomResource called "Foo", you can't also have a global CustomResource called "Foo", so there are no global-scope Foos to attack.  Unfortunately, if your role allows access to a resource "*" or apiGroup "*", then the "*" matches both namespace-scoped Foo and globally-scoped Bar in vulnerable versions of Kubernetes. This Role could be used to attack Bar.

If you're really scrutinizing the above example, note that the apiGroup "" is different than the apiGroup "*": the empty "" refers to core Kubernetes resources like Secrets, while "*" is a wildcard meaning any.
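For contrast, here's a hypothetical example of the kind of Role that is vulnerable; the name and verbs are made up, and Aspen Mesh does not ship anything like this. The important part is the "*" wildcards:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: do-everything          # hypothetical name
  namespace: default
rules:
- apiGroups: ["*"]             # wildcard apiGroup also matches CustomResource apiGroups
  resources: ["*"]             # wildcard resource matches namespaced and, due to the bug, global CustomResources
  verbs: ["get", "list", "create", "update", "patch", "delete"]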

Aspen Mesh and Istio define three globally-scoped CustomResources: ClusterRbacConfig, MeshPolicy, ClusterIssuer.  If you had a vulnerable Role defined and an attacker could assume that Role, then they could have modified those three CustomResources.  Aspen Mesh does not provide any vulnerable Roles so a cluster would need to have those Roles defined for some other purpose.

Recovery

First and as soon as possible, you should upgrade Kubernetes to a non-vulnerable version: 1.13.9, 1.14.5 or 1.15.2.

When you define Roles and ClusterRoles, you should follow the Principle of Least Privilege and avoid granting access to "*" - always prefer listing the specific resources required.

You can examine your cluster for vulnerable Roles.  If any exist, or have existed, those could have been exploited by an attacker with knowledge of this vulnerability to modify Aspen Mesh configuration.  To mitigate this, recreate any CustomResources after upgrading to a non-vulnerable version of Kubernetes.

This snippet will print out any vulnerable Roles currently configured, but cannot tell you if any may have existed in the past.  It relies on the jq tool:

kubectl get role --all-namespaces -o json |jq '.items[] | select(.rules[].resources | index("*"))'
kubectl get role --all-namespaces -o json |jq '.items[] | select(.rules[].apiGroups | index("*"))'

This is not a vulnerability in Aspen Mesh, so there is no need to upgrade Aspen Mesh.


Securing Containerized Applications With Service Mesh

The self-contained, ephemeral nature of microservices comes with some serious upside, but keeping track of every single one is a challenge, especially when trying to figure out how the rest are affected when a single microservice goes down. The end result is that if you’re operating or developing a microservices architecture, there’s a good chance part of your days are spent wondering what your services are up to.

With the adoption of microservices, problems also emerge due to the sheer number of services that exist in large systems. Problems like security, load balancing, monitoring and rate limiting that had to be solved once for a monolith, now have to be handled separately for each service.

The technology aimed at addressing these microservice challenges has been  rapidly evolving:

  1. Containers facilitate the shift from monolith to microservices by enabling independence between applications and infrastructure.
  2. Container orchestration tools solve microservices build and deploy issues, but leave many unsolved runtime challenges.
  3. Service mesh addresses runtime issues including service discovery, load balancing, routing and observability.

Securing Services with a Service Mesh

A service mesh provides an advanced toolbox that lets users add security, stability and resiliency to containerized applications. One of the more common applications of a service mesh is bolstering cluster security. There are 3 distinct capabilities provided by the mesh that enable platform owners to create a more secure architecture.

Traffic Encryption  

As a platform operator, I need to provide encryption between services in the mesh. I want to leverage mTLS to encrypt traffic between services. I want the mesh to automatically encrypt and decrypt requests and responses, so I can remove that burden from my application developers. I also want it to improve performance by prioritizing the reuse of existing connections, reducing the need for the computationally expensive creation of new ones. I also want to be able to understand and enforce how services are communicating and prove it cryptographically.
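With the Istio 1.1-era APIs, a minimal sketch of turning this on mesh-wide looks roughly like the following: a MeshPolicy that requires mTLS on the server side, plus a DestinationRule that tells clients to originate it. Treat this as illustrative rather than a drop-in config for your cluster:

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: istio-system
spec:
  host: "*.local"          # all in-mesh services
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # clients present the Istio-issued certificate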

Security at the Edge

As a platform operator, I want Aspen Mesh to add a layer of security at the perimeter of my clusters so I can monitor and address compromising traffic as it enters the mesh. I can use the built-in power of Kubernetes as an ingress controller to add security with ingress rules such as whitelisting and blacklisting. I can also apply service mesh route rules to manage compromising traffic at the edge. I also want control over egress so I can dictate that our network traffic does not go places it shouldn't (blacklist by default and only talk to what you whitelist).
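On the egress side, with outbound traffic blocked by default, whitelisting an external dependency is a ServiceEntry away. Here's a minimal sketch; the hostname is hypothetical:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-payments-api    # hypothetical external dependency
spec:
  hosts:
  - payments.example.com         # hypothetical host
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS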

Role Based Access Control (RBAC)

As the platform operator, it’s important that I am able to enforce the principle of least privilege, so the developers on my platform only have access to what they need, and nothing more. I want to enable controls so app developers can write policy for their apps, and only their apps, so that they can move quickly without impacting other teams. I want to use the same RBAC framework that I am familiar with to provide fine-grained RBAC within my service mesh.
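A minimal sketch of what that can look like with Istio's 1.1-era RBAC resources; the service and service account names are hypothetical:

apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: products-viewer
  namespace: default
spec:
  rules:
  - services: ["productpage.default.svc.cluster.local"]   # hypothetical service
    methods: ["GET"]
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-products-viewer
  namespace: default
spec:
  subjects:
  - user: "cluster.local/ns/default/sa/frontend"          # hypothetical caller identity
  roleRef:
    kind: ServiceRole
    name: "products-viewer"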

How a Service Mesh Adds Security

You’re probably thinking to yourself: traffic encryption and fine-grained RBAC sound great, but how does a service mesh actually get me to them? Service meshes that leverage a sidecar approach are uniquely positioned to intercept and encrypt data. A sidecar proxy is a prime insertion point to ensure that every service in a cluster is secured and monitored in real time. Let’s explore some details around why sidecars are a great place for security.

Sidecar Is a Great Place for Security

Securing applications and infrastructure has always been daunting, in part because the adage really is true: you are only as secure as your weakest link.  Microservices are an opportunity to improve your security posture but can also cut the other way, presenting challenges around consistency.  For example, the best organizations use the principle of least privilege: an app should only have the minimum amount of permissions and privilege it needs to get its job done.  That's easier to apply where a small, single-purpose microservice has clear and narrowly-scoped API contracts.  But there's a risk that as application count increases (lots of smaller apps), this principle can be unevenly applied. Microservices, when managed properly, increase feature velocity and enable security teams to fulfill their charter without becoming the Department of No.

There's tension: Move fast, but don't let security coverage slip through the cracks.  Prefer many smaller things to one big monolith, but secure each and every one.  Let each team pick the language of their choice, but protect them with a consistent security policy.  Encourage app teams to debug, observe and maintain their own apps but encrypt all service-to-service communication.

A sidecar is a great way to balance these tensions with an architecturally sound security posture.  Sidecar-based service meshes like Istio and Linkerd 2.0 put their datapath functionality into a separate container and then situate that container as close to the application they are protecting as possible.  In Kubernetes, the sidecar container and the application container live in the same Kubernetes Pod, so the communication path between sidecar and app is protected inside the pod's network namespace; by default it isn't visible to the host or other network namespaces on the system.  The app, the sidecar and the operating system kernel are involved in communication over this path.  Compared to putting the security functionality in a library, using a sidecar adds the surface area of kernel loopback networking inside of a namespace, instead of just kernel memory management.  This is additional surface area, but not much.

The major drawbacks of library approaches are consistency and sprawl in polyglot environments.  If you have a few different languages or application frameworks and take the library approach, you have to secure each one.  This is not impossible, but it's a lot of work.  For each different language or framework, you get or choose a TLS implementation (perhaps choosing between OpenSSL and BoringSSL).  You need a configuration layer to load certificates and keys from somewhere and safely pass them down to the TLS implementation.  You need to reload these certs and rotate them.  You need to evaluate "information leakage" paths: does your config parser log errors in plaintext (so it by default might print the TLS key to the logs)?  Is it OK for app core dumps to contain these keys?  How often does your organization require re-keying on a connection?  By bytes or time or both?  Minimum cipher strength?  When a CVE in OpenSSL comes out, what apps are using that version and need updating?  Who on each app team is responsible for updating OpenSSL, and how quickly can they do it?  How many apps have a certificate chain built into them for consuming public websites even if they are internal-only?  How many Dockerfiles will you need to update the next time a public signing authority has to revoke one?  slowloris?

Your organization can do all this work.  In fact, parts probably already have - above is our list of painful app security experiences but you probably have your own additions.  It is a lot of cross-organizational effort and process to get it right.  And you have to get it right everywhere, or your weakest link will be exploited.  Now with microservices, you have even more places to get it right.  Instead, our advice is to focus on getting it right once in the sidecar, and then distributing the sidecar everywhere, and get back to adding business value instead of duplicating effort.

There are some interesting developments on the horizon like the use of kernel TLS to defer bulk and some asymmetric crypto operations to the kernel.  That's great:  Implementations should change and evolve.  The first step is providing a good abstraction so that apps can delegate to lower layers. Once that's solid, it's straightforward to move functionality from one layer to the next as needed by use case, because you don't perturb the app any more.  As precedent, consider TCP Segmentation Offload, which lets the network card manage splitting app data into the correct size for each individual packet.  This task isn't impossible for an app to do, but it turns out to be wasted effort.  By deferring TCP segmentation to the kernel, it left the realm of the app.  Then, kernels, network drivers, and network cards were free to focus on the interoperability and semantics required to perform TCP segmentation at the right place.  That's our position for this higher-level service-to-service communication security: move it outside of the app to the sidecar, and then let sidecars, platforms, kernels and networking hardware iterate.

Envoy Is a Great Sidecar

We use Envoy as our sidecar because it's lightweight, has some great features and good API-based configurability.  Here are some of our favorite parts about Envoy:

  • Configurable TLS Parameters: Envoy exposes all the TLS configuration points you'd expect (cipher strength, protocol versions, curves).  The advantage to using Envoy is that they're configured the same way for every app using the sidecar.
  • Mutual TLS: Typically TLS is used to authenticate the server to the client, and to encrypt communication.  What's missing is authenticating the client to the server - if you do this, then the server knows what is talking to it.  Envoy supports this bi-directional authentication out of the box, which can easily be incorporated into a SPIFFE system.  In today's complex cloud datacenters, you're better off trusting things based on cryptographic proof of what they are, instead of relying on network perimeter protection based on where they called from.
  • BoringSSL: This fork of OpenSSL removed huge amounts of code like implementations of obsolete ciphers and cleaned up lots of vestigial implementation details that had repeatedly been the source of security vulnerabilities.  It's a good default choice if you don't need any OpenSSL-specific functionality because it's easier to get right.
  • Security Audit: A security audit can't prove the absence of vulnerabilities but it can catch mistakes that demonstrate either architectural weaknesses or implementation sloppiness.  Envoy's security audit did find issues but in our opinion indicated a high level of security health.
  • Fuzzed and Bountied: Envoy is continuously fuzzed (exposed to malformed input to see if it crashes) and covered by Google's Patch Reward security bug bounty program.
  • Good API Granularity: API-based configuration doesn't mean "just serialize/deserialize your internal state and go."  Careful APIs thoughtfully map to the "personas" of what's operating them (even if those personas are other programs).  Envoy's xDS APIs in our experience partition routing behavior from cluster membership from secrets.  This makes it easy to make well-partitioned controllers.  A knock-on benefit is that it is easy in our experience to debug and test Envoy because config constructs usually map pretty clearly to code constructs.
  • No garbage collector: There are great languages with automatic memory management like Go that we use every day.  But we find languages like C++ and Rust provide predictable and optimizable tail latency.
  • Native Extensibility via Filters: Envoy has layer 4 and layer 7 extension points via filters that are written in C++ and linked into Envoy.
  • Scripting Extensibility via Lua: You can write Lua scripts as extension points as well.  This is very convenient for rapid prototyping and debugging.

One of these benefits deserves an even deeper dive in a security-oriented discussion.  The API granularity of Envoy is based on a scheme called "xDS" which we think of as follows:  Logically split the Envoy config API based on the user of that API.  The user in this case is almost always some other program (not a human), for instance a Service Mesh control plane element.

For instance, in xDS listeners ("How should I get requests from users?") are separated from clusters ("What pods or servers are available to handle requests to the shoppingcart service?").  The "x" in "xDS" is replaced with whatever functionality is implemented ("LDS" for listener discovery service).  Our favorite security-related partitioning is that the Secret Discovery Service can be used for propagating secrets to the sidecars independent of the other xDS APIs.

Because SDS is separate, the control plane can implement the Principle of Least Privilege: nothing outside of SDS needs to handle or have access to any private key material.
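To make that concrete, the Envoy-side TLS context for SDS looks roughly like the sketch below. The sds-grpc cluster name is how Istio-era configs typically reference the local SDS endpoint; treat the exact field names as a sketch of the v2 API rather than something authoritative:

common_tls_context:
  tls_certificate_sds_secret_configs:
  - name: default                   # workload cert/key delivered over SDS, never written to disk
    sds_config:
      api_config_source:
        api_type: GRPC
        grpc_services:
        - envoy_grpc:
            cluster_name: sds-grpc  # local agent serving the Secret Discovery Service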

Mutual TLS is a great enhancement to your security posture in a microservices environment.  We see mutual TLS adoption as gradual - almost any real-world app will have some containerized microservices ready to join the service mesh and mTLS on day one.  But practically speaking, many of these will depend on mesh-external services, containerized or not.  It is possible in most cases to integrate these services into the same trust domain as the service mesh, and oftentimes these components can even participate in client TLS authentication so you get true mutual TLS.

In our experience, this happens by gradually expanding the "circle" of things protected with mutual TLS.  First, stateless containerized business logic, next in-cluster third party services, finally external state stores like bare metal databases.  That's why we focus on making the state of mTLS easy to understand in Aspen Mesh, and provide assistants to help you detect configuration mishaps.

What Lives Outside the Sidecar?

You need a control plane to configure all of these sidecars.  In some simple cases it may be tempting to do this with some CI integration to generate configs plus DNS-based discovery.  This is viable but it's hard to do rapid certificate rotation.  Also, it leaves out more dynamic techniques like canaries, progressive delivery and A/B testing.  For this reason, we think most real-world applications will include an online control plane that should:

  • Disseminate configuration to each of the sidecars with a scalable approach.
  • Rotate sidecar certificates rapidly to reduce the value to an attacker of a one-time exploit of an application.
  • Collect metadata on what is communicating with what.

A good security posture means you should be automating some work on top of the control plane. We think these things are important (and built them into Aspen Mesh):

  • Organizing information to help humans narrow in on problems quickly.
  • Warning on potential misconfigurations.
  • Alerting when unhealthy communication is observed.
  • Inspecting the firehose of metadata for surprises; these patterns could be application bugs, security issues or both.

If you’re considering or going down the Kubernetes path, you should be thinking about the unique security challenges that come with microservices running in a Kubernetes cluster. Kubernetes solves many of these, but there are some critical runtime issues that a service mesh can make easier and more secure. If you would like to talk about how the Aspen Mesh platform and team can address your specific security challenge, feel free to find some time to chat with us.  Or to learn more, get the free white paper on achieving Zero-trust security for containerized applications here.


Announcing Aspen Mesh 1.1

Aspen Mesh release 1.1.3 is now out to address a critical security update. The Aspen Mesh release is based on security patches released in the Istio 1.1.3 release - you can read more about the update here. We recommend Aspen Mesh users running 1.1 immediately upgrade to 1.1.3.

Close on the heels of the much anticipated Istio 1.1 release, we are excited to announce the release of Aspen Mesh 1.1. Our latest release provides all the features of Istio plus the support of the Aspen Mesh platform and team, and additional features you need to operate in the enterprise.

As with previous Istio releases, the open source community has done a great job of creating a release with exciting new features, improved stability and enhanced performance. The aim of this blog is to distill what has changed in the new release and point out the few gotchas and known issues in an easy to consume format.  We often find that there are so many changes (kudos to the community for all the hard work!) that it is difficult for users to discern what pieces they should care about and what actions they need to take on the pieces they care about. Hopefully, this blog will help address a few of these issues.

Before we delve into the specifics, let’s focus on why release 1.1 was a big milestone for the Istio community and how things were handled differently compared to previous releases:

  • Quality was the major focus of this release. If you look at the history, you will notice that it took six release candidates to get the release out. The maintainers worked diligently to resolve tricky user-identified issues and address them correctly instead of getting the release out on a predefined date. We would see constant updates/PRs (even on weekends) to address these issues which is a testament to the dedication of the open source community.
  • User experience was a key area of focus in the community for this release. There was a new UX Working Group created to address various usability issues and to improve the user’s Istio journey from install to upgrade. We believe that this is a step in the right direction and will lead to easier Istio adoption. Aspen Mesh actively participates in these meetings with an eye on improving the experience of Istio and Aspen Mesh users.
  • Meaningful effort was put into improving the documentation, especially around consistent use of terminology.

It was great to see that the community listened to its users, addressed critical issues and didn’t rush to release 1.1. We look forward to how Project Mauve can further improve the engineering process, thereby improving the quality of Istio releases.

So, let’s move onto the exciting new features and improvements that are part of the Aspen Mesh 1.1 release.

Aspen Mesh 1.1 Features

Reduced sidecar memory usage
This was a long-standing issue that Istio users had faced when dealing with medium to large scale clusters. The Envoy sidecars’ memory consumption grew as new services and pods were deployed in the cluster resulting in a considerable memory footprint for each sidecar proxy. As these sidecars are part of every pod in the mesh this can quickly impact the scheduling and memory requirements for your cluster. In release 1.1, you can expect a significant reduction in the memory consumption by the sidecars. This benefit is primarily driven by reducing the set of statistics exposed by each sidecar. Previously, the sidecars were configured to expose metrics for every Envoy cluster, listener and HTTP connection manager which would increase the number of metrics reported roughly in proportion to the number of services and pods. In release 1.1, the set of metrics is now reduced to the cluster and listener managers (in addition to Istio specific stats) which always expose a fixed set of metrics. We found in our testing that the sidecar memory consumption is significantly lower compared to Aspen Mesh release 1.0.4 and we are looking forward to users being able to inject sidecars in more applications in their clusters.

New multi cluster support

Earlier versions of Istio supported multiple clusters via a single control plane topology. This meant that the Istio control plane would be deployed only on one cluster which would manage services on both local and remote clusters. Additionally, it required a flat network IP space for the pods to communicate across clusters. These restrictions limited real world uses of multi cluster functionality as the control plane could easily become a single point of failure and the flat IP space was not always feasible. In this release support was added for multiple control plane topology which provides the desired control plane availability and no restrictions on the IP layout. Networking across clusters is set up via the ingress gateways which rely on mTLS (common root Certificate Authority across clusters) to verify the peer traffic. We are excited to see new use cases emerge for multi cluster service mesh and how enterprises can leverage Aspen Mesh to truly build resilient and highly available applications deployed across clusters.   

CNI support
Istio by default sets up the pod traffic redirection to/from the sidecar proxy by injecting an init container which uses iptables under the hood. The ability to use iptables requires elevated permissions which is a hindrance to adopting Istio in various organizations due to compliance concerns. Istio and Aspen Mesh now support CNI as a new way to perform traffic redirection, removing the need for elevated permissions. It is great to see this enhancement as we think it is critical to have the principle of least privileges applied to the service mesh. We’re excited to be able to drive advanced compliance use cases with our customers over the next few months.

New sidecar resource
One of the biggest challenges users faced with the old releases was that all the sidecars in the mesh had configuration related to all the services in the cluster even though a particular sidecar proxy only needed to talk to a small subset of services. This resulted in excess churn as massive amounts of configuration were processed and transmitted to the sidecars with every configuration update. This caused intermittent request failures and CPU spikes in all the sidecars on any configuration change in the cluster. The 1.1 release added a new Sidecar resource to enable operators to configure the ingress and egress of each proxy. With this resource, users can control the scope and visibility of configuration distributed to the sidecars and attain better resource utilization and scalability of Istio components.
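Here's a minimal sketch of the new Sidecar resource that scopes the proxies in one namespace to their own namespace plus the control plane; the application namespace name is hypothetical:

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: my-app            # hypothetical application namespace
spec:
  egress:
  - hosts:
    - "./*"                    # services in this namespace
    - "istio-system/*"         # keep the control plane reachable (see the gotchas later in this post)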

Apart from the aforementioned major changes, there are quite a few lesser known enhancements in this release which can be helpful in exploring Aspen Mesh capabilities.

Enabling end-user JWT authentication by path
Istio ingressgateway and sidecar proxies support decoding JWT provided by the end user and passing it to the applications as an HTTP request header. This has the operational benefit of isolating authentication from application code and instead using the service mesh infrastructure layer for these critical security operations. In earlier versions of Istio you could only enable/disable this feature on a per service or port basis but not for specific HTTP paths. This was very limiting especially for ingress gateways where you might have some paths requiring authentication and some that didn’t. In release 1.1, an experimental feature was added to enable end user JWT authentication based on request path.
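As a sketch of what this can look like with a 1.1-era authentication Policy applied to the ingress gateway; the issuer, JWKS URI and paths are hypothetical, and since the feature is experimental the trigger-rule field names may shift:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  targets:
  - name: istio-ingressgateway
  origins:
  - jwt:
      issuer: "https://accounts.example.com"                          # hypothetical issuer
      jwksUri: "https://accounts.example.com/.well-known/jwks.json"   # hypothetical JWKS endpoint
      triggerRules:
      - includedPaths:
        - prefix: /api/          # require end-user JWTs only on these paths
  principalBinding: USE_ORIGIN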

New Helm installation options
There are many new Helm installation options added in this release (in addition to the old ones) that are useful in customizing Aspen Mesh based on your needs. We often find that customer use cases are quite different and unique for every environment, so the addition of these options makes it easy to tailor service mesh to your needs. Some of the important new options are:

  • Node selector - Many of our customers want to install the control plane components on their own nodes for better monitoring, isolation and resilience. In this release there is an easy Helm option, global.defaultNodeSelector to achieve this functionality.
  • Tracing backend address - Users often have their tracing set up and want to easily add Istio on top to work with their existing tracing system. In the older version it was quite painful to provide a different tracing backend to Istio (used to be hardcoded to “zipkin.istio-system”). This release added a new “global.tracer.zipkin.address” Helm option to enable this functionality. If you’re an Aspen Mesh customer, we automatically set this up for you so that the traces are sent to the Aspen Mesh platform where you can access them via our hosted Jaeger service.
  • Customizable proxy access log format - The sidecar proxies in the older releases performed access logging in the default Envoy format. Even though the information is great, you might have access logging set up in other systems in your environment and want to have a uniform access logging format throughout your cluster for ease of parsing, searching and tooling. This new release supports a Helm option “global.proxy.accessLogFormat” for users to easily customize the logging format based on their environment.
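Pulled together, here is a hedged example of passing a couple of these options at install time; the chart path, namespace and values are placeholders for your environment:

helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set global.tracer.zipkin.address=zipkin.observability:9411 \
  --set global.proxy.accessLogFormat='[%START_TIME%] %REQ(:METHOD)% %REQ(:PATH)% %RESPONSE_CODE%' \
  > istio.yaml
kubectl apply -f istio.yaml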

This release also added many debugging enhancements which make it easy for users to operate and debug when running an Aspen Mesh cluster. Some critical enhancements in this area were:

Istioctl enhancements
Istioctl is a tool similar to kubectl for performing Istio specific operations which can aid in debugging and validating various Istio configuration and runtime issues. There were several enhancements made to this tool which are worth mentioning:

  • Verify install - Istioctl now supports an experimental command to verify the installation in your cluster. This is a great step for first time Istio users before you dive deeper into exploring all the Istio capabilities. If you’re an Aspen Mesh customer, our demo installer automatically does this step for you and lets you know if the installation was successful.
  • Static configuration validation - Istioctl supports a “validate” command for users to verify their configuration before applying it to their cluster. Using this effectively can prevent easy misconfigurations and surprises which can be hard to debug. Note that Galley now also performs validation and rejects configuration if it’s invalid in the new release. If you’re an Aspen Mesh customer, you can use this new functionality in addition to the automated runtime analysis we perform via istio-vet. We find that the static single resource validation is a good first step but an automated tool like istio-vet from Aspen Mesh which can perform runtime analysis across multiple resources is also needed to ensure a properly functioning mesh.
  • Proxy health status - Support was added to quickly inspect and verify the health status of proxy (default port 15020) which can be very useful in debugging request failures. We often found that users struggled in understanding what qualifies as a healthy Istio proxy (sidecar or gateways) and we think this can help to alleviate this issue.
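A few hedged command sketches for the enhancements above; the pod name and file names are placeholders:

istioctl experimental verify-install -f istio.yaml                           # check the installation applied to the cluster
istioctl validate -f virtual-service.yaml                                    # static validation before applying configuration
kubectl exec $POD -c istio-proxy -- curl -s localhost:15020/healthz/ready    # proxy health/readiness endpoint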

Along with all of these great new improvements, there are a few gotchas or unexpected behaviors you might observe especially if you’re upgrading Istio from an older version. We’ve done a thorough investigation of these potential issues and are making sure our customers have a smooth transition with our releases. However, for the broader community let’s cover a few important gotchas to be aware of:

  • Access allowed to any external services by default - The new Istio release will by default allow access to any external service. In previous releases, all external traffic was blocked and required users to explicitly whitelist external services via ServiceEntry. This decision was reached by the community to make it easier for customers to add Istio on top of their existing deployments and not break working traffic. However, we think this is a major change that can lead to security escapes if you’re upgrading to this version. With that in mind, the Aspen Mesh distribution of the release will continue to block all external traffic by default. If you want to customize this setting, the Helm option “global.outboundTrafficPolicy.mode” can be updated based on your requirement.
  • Proxy access logs disabled by default - In this Istio release, the default behavior for proxy access logging has changed and it is now turned off by default. For first time users it is very helpful to observe access logs as the traffic flows through their services in the mesh. Additionally, if you’re upgrading to a new version and find that your logs are missing, it might break debugging capabilities that you have built around it. Because of this, the Aspen Mesh distribution has the proxy access logs turned on by default. You can customize this setting by updating the Helm option “global.proxy.accessLogFile” to “/dev/stdout”.
  • Every Sidecar resource requires “istio-system” - If you’re configuring the newly available Sidecar resource, be sure to include “istio-system” as one of the allowed egress hosts. During our testing we found that in the absence of “istio-system” namespace, the sidecar proxies will start experiencing failures communicating to the Istio control plane which can lead to cascading failures. We are working with the community to address this issue so that users can configure this resource with minimal surprises.
  • Mixer policy checks disabled by default -  Mixer policy checks were turned on by default in earlier Istio releases which meant that the sidecar proxies and gateways would always consult Mixer in the Istio control plane to check policy and forward the request to the application only if the policy allowed it. This feature was seldom used but added latency due to the out-of-process network call. This new release turned off policy checks by default after much deliberation and debate in the community. What this means is if you had previously configured Policy checks and were relying on Mixer to enforce it, after the upgrade those configurations will no longer have any effect. If you would like to enable them by default, set the Helm option “global.disablePolicyChecks” to false.

We hope this blog has made it easy to understand the scope and impact of the 1.1 release. At Aspen Mesh, we keep a close tab on the community and actively participate to make the adoption and upgrade path easier for our customers. We believe that enterprises should spend less time and effort on configuring the service mesh and focus on adding business value on top.

We'll be covering subsequent topics and deep diving into how you can set up and make the most out of new 1.1 features like multi cluster. Be sure to subscribe to the Aspen Mesh blog so you don't miss out.

If you want to quickly get started with the Aspen Mesh 1.1 release grab it here or if you’re an existing customer please follow our upgrade instructions mentioned in the documentation.


Expanding Service Mesh Without Envoy

Istio uses the Envoy sidecar proxy to handle traffic within the service mesh.  The following article describes how to use an external proxy, F5 BIG-IP, to integrate with an Istio service mesh without having to use Envoy for the external proxy.  This can provide a method to extend the service mesh to services where it is not possible to deploy an Envoy proxy.

This method could be used to secure a legacy database to only allow authorized connections from a legacy app that is running in Istio, but not allow any other applications to connect.

Securing Legacy Protocols

A common problem that customers face when deploying a service mesh is how to restrict access to an external service to a limited set of services in the mesh.  When all services can run on any nodes it is not possible to restrict access by IP address (“good container” comes from the same IP as “malicious container”).

One method of securing the connection is to isolate an egress gateway to a dedicated node and restrict traffic to the database from those nodes.  This is described in Istio’s documentation:

Istio cannot securely enforce that all egress traffic actually flows through the egress gateways. Istio only enables such flow through its sidecar proxies. If attackers bypass the sidecar proxy, they could directly access external services without traversing the egress gateway. Thus, the attackers escape Istio’s control and monitoring. The cluster administrator or the cloud provider must ensure that no traffic leaves the mesh bypassing the egress gateway.

   -- https://istio.io/docs/examples/advanced-gateways/egress-gateway/#additional-security-considerations (2019-03-25)

Another method would be to use mesh expansion to install Envoy onto the VM that is hosting your database. In this scenario the Envoy proxy on the database server would validate requests prior to forwarding them to the database.

The third method that we will cover will be to deploy a BIG-IP to act as an egress device that is external to the service mesh.  This is a hybrid of mesh expansion and multicluster mesh.

Mesh Expansion Without Envoy

Under the covers Envoy is using mutual TLS to secure communication between proxies.  To participate in the mesh, the proxy must use certificates that are trusted by Istio; this is how VM mesh expansion and multicluster service mesh are configured with Envoy.  To use an alternate proxy we need to have the ability to use certificates that are trusted by Istio.

Example of Extending Without Envoy

A proof-of-concept of extending the mesh can be taken with the following example.  We will create an “echo” service that is TCP based and lives outside of the service mesh.  The goal will be to restrict access so that only authorized “good containers” can connect to the “echo” service via the BIG-IP.  The steps involved:

  1. Retrieve/Create certificates trusted by Istio
  2. Configure external proxy (BIG-IP) to use trusted certificates and only trust Istio certificates
  3. Add policy to external proxy to only allow “good containers” to connect
  4. Register BIG-IP device as a member of the Istio service mesh
  5. Verify that “good container” can connect to “echo” and “bad container” cannot

First we install a set of certificates on the BIG-IP that Envoy will trust and configure the BIG-IP to only allow connections from Istio.  The certs could either be pulled directly from Kubernetes (similar to setting up mesh expansion) or generated by a common CA that is trusted by Istio (similar to multicluster service mesh).
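If you pull them from Kubernetes, a hedged sketch of extracting the Citadel-issued certificates for the default service account might look like this; adjust the secret name and namespace for the identity you actually want the BIG-IP to present:

kubectl get secret istio.default -o jsonpath='{.data.root-cert\.pem}'  | base64 --decode > root-cert.pem
kubectl get secret istio.default -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode > cert-chain.pem
kubectl get secret istio.default -o jsonpath='{.data.key\.pem}'        | base64 --decode > key.pem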

Once the certs are retrieved/generated we install them onto the proxy, BIG-IP, and configure the device to only trust client side certificates that are generated by Istio.

To enable a policy to validate the identity of the “good container”, we will inspect the X509 Subject Alternative Name fields of the client certificate to find the SPIFFE name that contains the identity of the container.

Once the external proxy is configured we can register the device using “istioctl register” (similar to mesh expansion).
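Registration itself is a one-liner; a hedged sketch using the names from this walkthrough (the BIG-IP's address here is illustrative):

istioctl register bigip 10.1.1.7 9000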

To verify that our test scenario is working we will have two namespaces, “default” and “trusted”.  Connections from “trusted” will be allowed and connections from “default” will be rejected.  From each namespace we create a pod and run the command “nc bigip.default.svc.cluster.local 9000”.  Looking at our BIG-IP logs we can verify that our policy (iRule) worked:

Mar 25 18:56:39 ip-10-1-1-7 info tmm5[17954]: Rule /Common/log_cert <CLIENTSSL_CLIENTCERT>: allowing: spiffe://cluster.local/ns/trusted/sa/sleep
Mar 25 18:57:00 ip-10-1-1-7 info tmm2[17954]: Rule /Common/log_cert <CLIENTSSL_CLIENTCERT>: rejecting spiffe://cluster.local/ns/default/sa/default

Connection from our “good container”

/ # nc bigip.default.svc.cluster.local 9000
hi
HI

Connection from our “bad container”

# nc bigip.default.svc.cluster.local 9000

In the case of the “bad container” we are unable to connect.  The “nc” (netcat) command simulates a very basic TCP client.  A more realistic example would be connecting to an external database that contains sensitive data.  In the “good” example we are echoing back the capitalized input (“hi” becomes “HI”).

Just One Example

In this article we looked at expanding a service mesh without Envoy.  This was focused on egress TCP traffic, but it could be expanded to:

  • Using BIG-IP as an SNI proxy instead of NGINX
  • Securing inbound traffic using mTLS and/or JWT tokens
  • Using BIG-IP as an ingress gateway
  • Using ServiceEntry/DestinationRules instead of registered service

If you want to see the process in action, check out this short video walkthrough.

https://youtu.be/83GdmwTvWLI

Let me know in the comments whether you’re interested in any of these use cases or come up with your own.  Thank you!


Why Service Meshes, Orchestrators Are Do or Die for Cloud Native Deployments

The self-contained, ephemeral nature of microservices comes with some serious upside, but keeping track of every single one is a challenge, especially when trying to figure out how the rest are affected when a single microservice goes down. The end result is that if you’re operating or developing in a microservices architecture, there’s a good chance part of your days are spent wondering what the hell your services are up to.

With the adoption of microservices, problems also emerge due to the sheer number of services that exist in large systems. Problems like security, load balancing, monitoring and rate limiting that had to be solved once for a monolith, now have to be handled separately for each service.

The good news is that engineers love a good challenge. And almost as quickly as they are creating new problems with microservices, they are addressing those problems with emerging microservices tools and technology patterns. Maybe the emergence of microservices is just a smart play by engineers to ensure job security.

Today’s cloud native darling, Kubernetes, eases many of the challenges that come with microservices. Auto-scheduling, horizontal scaling and service discovery solve the majority of build-and-deploy problems you’ll encounter with microservices.

Kubernetes leaves a few key containerized application runtime issues unsolved. That’s where a service mesh steps in. Let’s take a look at what Kubernetes provides, and how Istio adds to Kubernetes to solve the microservices runtime issues.

Kubernetes Solves Build-and-Deploy Challenges

Managing microservices at runtime is a major challenge. A service mesh helps alleviate this challenge by providing observability, control and security for your containerized applications. Aspen Mesh is the fully supported distribution of Istio that makes service mesh simple and enterprise-ready.

Kubernetes supports a microservice architecture by enabling developers to abstract away the functionality of a set of pods, and expose services to other developers through a well-defined API. Kubernetes enables L4 load balancing, but it doesn’t help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting and circuit breaking.

Service Mesh Addresses Challenges of Managing Traffic at Runtime

Service mesh helps address many of the challenges that arise when your application is being consumed by the end user. Being able to monitor which services are communicating with each other, whether those communications are secure, and being able to control the service-to-service communication in your clusters are key to ensuring applications run securely and resiliently.

Istio also provides a consistent view across a microservices architecture by generating uniform metrics throughout. It removes the need to reconcile different types of metrics emitted by various runtime agents, or add arbitrary agents to gather metrics for legacy un-instrumented apps. It adds a level of observability across your polyglot services and clusters that is unachievable at such a fine-grained level with any other tool.

Istio also adds a much deeper level of security. While Kubernetes only provides basic secret distribution and control-plane certificate management, Istio provides mTLS capabilities so you can encrypt on the wire traffic to ensure your service-to-service communications are secure.

A Match Made in Heaven

Pairing Kubernetes with a service mesh like Istio gives you the best of both worlds, and since Istio was made to run on Kubernetes, the two work together seamlessly. You can use Kubernetes to manage all of your build and deploy needs while Istio takes care of the important runtime issues.

Kubernetes has matured to a point that most enterprises are using it for container orchestration. Currently, there are 74 CNCF-certified service providers — which is a testament to the fact that there is a large and growing market. I see Istio as an extension of Kubernetes and a next step to solving more challenges in what feels like a single package.

Already, Istio is quickly maturing and is starting to see more adoption in the enterprise. It’s likely that in 2019 we will see Istio emerge as the service mesh standard for enterprises in much the same way Kubernetes has emerged as the standard for container orchestration.


Running Stateful Apps with Service Mesh: Kubernetes Cassandra with Istio mTLS Enabled

Cassandra is a popular, heavy-load, highly performant, distributed NoSQL database.  It is fully integrated into many mainstay cloud and cloud-native architectures. At companies such as Netflix and Spotify, Cassandra clusters provide continuous availability, fault tolerance, resiliency and scalability.

Critical and sensitive data is sent to and from a Cassandra database.  When deployed in a Kubernetes environment, ensuring the data is secure and encrypted is a must.  Understanding data patterns and performance latencies across nodes becomes essential, as your Cassandra environment spans multiple datacenters and cloud vendors.

A service mesh provides service visibility, distributed tracing, and mTLS encryption.  

While it’s true Cassandra provides its own TLS encryption, one of the compelling features of Istio is the ability to uniformly administer mTLS for all of your services.  With a service mesh, you can set up an easy and consistent policy where Istio automatically manages the certificate rotation. Pulling Cassandra into a service mesh pairs capabilities of the two technologies in a way that makes running stateful services much easier.

In this blog, I’ll cover the steps necessary to configure Istio with mTLS enabled in a Kubernetes Cassandra environment.  We collected some information from the Istio community, did some testing ourselves and pieced together a workable solution.  One of the benefits you get with Aspen Mesh is our Istio expertise from running Istio in production for the past 18 months.  We are tightly engaged with the Istio community and continually testing and working out the kinks of upstream Istio. We’re here to help you with your service mesh path to production!

Let’s consider how Cassandra operates.  To achieve continuous availability, Cassandra uses a “ring” communication approach, meaning each node communicates continually with the other nodes in the cluster. For node consensus, each node sends metadata to several peers using the Gossip protocol, and the receiving nodes then “gossip” to the remaining nodes. This Gossip exchange is similar to a TCP three-way handshake, and all of the metadata, like heartbeat state, node status, location, etc., is messaged across nodes via IP address:port.

In a Kubernetes deployment, Cassandra nodes are deployed as StatefulSets to ensure the allocated number of Cassandra nodes are available at all times. Persistent volumes are associated with the Cassandra StatefulSets, and a headless service is created to ensure a stable network ID.  This allows Kubernetes to restart a pod on another node and transfer its state seamlessly to the new node.

Now, here’s where it gets tricky.  When implementing an Istio service mesh with mTLS enabled, the Envoy sidecar intercepts all of the traffic from the Cassandra nodes, verifies where it’s coming from, decrypts and sends the payload to the Cassandra pod through an internal loopback address.   The Cassandra nodes are all listening on their Pod IPs for gossip. However, Envoy is forwarding only to 127.0.0.1, where Cassandra isn't listening. Let’s walk through how to solve this issue.

Setting up the Mesh:

We used the cassandra:v13 image from the Google samples repo for our Kubernetes Cassandra environment. There are a few things you’ll need to ensure are included in the Cassandra manifest at the time of deployment.  The Cassandra Service needs to be headless (clusterIP: None), and it needs to expose the additional ports and port names that Cassandra uses to communicate:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  namespace: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - name: tcp-client
    port: 9042
  - port: 7000
    name: tcp-intra-node
  - port: 7001
    name: tcp-tls-intra-node
  - port: 7199
    name: tcp-jmx
  selector:
    app: cassandra

The next step is to tell each Cassandra node to listen on the Envoy loopback address.

This image, by default, sets Cassandra’s listener to the Kubernetes Pod IP.  The listener address will need to be set to the localhost loopback address. This allows the Envoy sidecar to pass communication through to the Cassandra nodes.

To enable this, you will need to change Cassandra’s config file, cassandra.yaml.

We did this by adding a substitution to our Kubernetes Cassandra manifest, based on a workaround from a related Istio issue:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: cassandra
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        command: [ "/usr/bin/dumb-init", "/bin/bash", "-c", "sed -i 's/^CASSANDRA_LISTEN_ADDRESS=.*/CASSANDRA_LISTEN_ADDRESS=\"127.0.0.1\"/' /run.sh && /run.sh" ]
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042

This simple change uses sed to patch the Cassandra startup script so that it listens on localhost.

If you're not using the google-samples/cassandra container you should modify your Cassandra config or container to set the listen_address to 127.0.0.1.  For some containers, this may already be the default.
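
As a minimal sketch (assuming your image reads cassandra.yaml directly rather than templating it at startup), the relevant settings are:

# cassandra.yaml -- only the settings relevant to this workaround
listen_address: 127.0.0.1      # listen on the loopback address that Envoy forwards to
broadcast_address: <pod-ip>    # keep advertising the Pod IP so peer nodes can still reach this node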

You'll need to remove any ServiceEntry or VirtualService resources associated with the Cassandra deployment, as no additional routing entries or rules are necessary.  Nothing external needs to communicate with it; Cassandra is now inside the mesh, and communication will simply pass through to each node.

Since the Cassandra Service is headless (clusterIP: None), a DestinationRule does not need to be added.  When there is no clusterIP assigned, Istio sets the load balancing mode to PASSTHROUGH by default.
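
No DestinationRule is required here, but if you wanted to spell out that default explicitly, a sketch would look something like this:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: cassandra
  namespace: cassandra
spec:
  host: cassandra.cassandra.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: PASSTHROUGH   # forward to the original destination address instead of load balancing in Envoy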

If you are using Aspen Mesh, the global MeshPolicy has mTLS enabled by default, so no changes are necessary. You can confirm this by inspecting the default MeshPolicy:

$ kubectl edit meshpolicy default -o yaml
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
.
. #edited out
.
spec:
  peers:
  - mtls: {}

Finally, create a Cassandra namespace, enable automatic sidecar injection and deploy Cassandra.

$ kubectl create namespace cassandra
$ kubectl label namespace cassandra istio-injection=enabled
$ kubectl -n cassandra apply -f <Cassandra-manifest>.yaml

Here is the output that shows the Cassandra nodes running with Istio sidecars.

$ kubectl get pods -n cassandra                                                                                   
NAME                     READY     STATUS    RESTARTS   AGE
cassandra-0              2/2       Running   0          22m
cassandra-1              2/2       Running   0          21m
cassandra-2              2/2       Running   0          20m
cqlsh-5d648594cb-86rq9   2/2       Running   0          2h

Here is the output validating mTLS is enabled.

$ istioctl authn tls-check cassandra.cassandra.svc.cluster.local

HOST:PORT          STATUS  SERVER  CLIENT  AUTHN POLICY  DESTINATION RULE
cassandra...:7000  OK      mTLS    mTLS    default/      default/istio-system

Here is the output validating the Cassandra nodes are communicating with each other and able to establish load-balancing policies.

$ kubectl exec -it -n cassandra cassandra-0 -c cassandra -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load        Tokens  Owns (effective)  Host ID                               Rack
UN  100.96.1.225  129.92 KiB  32      71.8%             f65e8c93-85d7-4b8b-ae82-66f26b36d5fd  Rack1-K8Demo
UN  100.96.3.51   157.68 KiB  32      55.4%             57679164-f95f-45f2-a0d6-856c62874620  Rack1-K8Demo
UN  100.96.4.59   142.07 KiB  32      72.8%             cc4d56c7-9931-4a9b-8d6a-d7db8c4ea67b  Rack1-K8Demo

If this is a solution that can make things easier in your environment, sign up for the free Aspen Mesh Beta.  It will guide you through an automated Istio installation, then you can install Cassandra using the manifest covered in this blog, which can be found here.   


Advancing the promise of service mesh: Why I work at Aspen Mesh

The themes and content expressed are mine alone, with helpful insight and thoughts from my colleagues, and are about software development in a business setting.

I’ve been working at Aspen Mesh for a little over a month and during that time numerous people have asked me why I chose to work here, given the opportunities in Boulder and the Front Range.

To answer that question, I need to talk a bit about my background. I’ve been a professional software developer for about 13 years now. During that time I’ve primarily worked on the back-end for distributed systems and have seen numerous approaches to the same problems with various pros and cons. When I take a step back, though, a lot of the major issues that I’ve seen are related to deployment and configuration around service communication:
  • How do I add a new service to an already existing system?
  • How big, in scope, should these services be?
  • Do I use a message broker?
  • How do I handle discoverability, high availability and fault tolerance?
  • How, and in what format, should data be exchanged between services?
  • How do I audit the system when the system inevitably comes under scrutiny?

I’ve seen many different approaches to these problems. In fact, there are so many approaches, some orthogonal and some similar, that software developers can easily get lost. While the themes are constant, it is time consuming for developers to get up to speed with all of these technologies. There isn’t a single solution that solves every common problem seen in the backend; I’m sure the same applies to the front-end as well. It’s hard to truly understand the pros and cons of an approach until you have a working system; and when that happens and if you then realize that the cons outweigh the pros, it may be difficult and costly to get back to where you started (see sunk cost fallacy and opportunity cost). Conversely, analysis paralysis is also costly to an organization, both in terms of capital—software developers are not cheap—and an inability to quickly adapt to market pressures, be it customer needs and requirements or a competitor that is disrupting the market.

Yet the hype cycle continues. There is always a new shiny thing taking the software world by storm. You see it in discussions on languages, frameworks, databases, messaging protocols, architectures ad infinitum. Separating the wheat from the chaff is something developers must do to ensure they are able to meet their obligations. But with the signal-to-noise ratio being low at times and with looming deadlines, not all possibilities can be explored.

So as software developers, we have an obligation to do our due diligence and to deliver software that provides customer value: software that helps customers get their work done and doesn’t impede them, but enables them. Most customers don’t care about which languages you use or which databases you use or how you build your software or what software process methodology you adhere to, if any. They just want the software you provide to enable them to do their work. In fact, that sentiment is so strong that slogans have been made around it.

So what do customers care about, generally speaking? They care about access to their data, how they can view it, modify it and draw value from it. It should look and feel modern, but even that isn’t a strict requirement. It should be simple for a novice to use, yet provide enough advanced capability that your most advanced users teach you something new about the tool you’ve created. This is information technology after all. Technology for technology’s sake is not a useful outcome.

Any work that detracts from adding customer value needs to be deprioritized, as there is always more work to do than hours in the day. As developers, it’s our job to be knee-deep in the weeds, so it’s easy to lose sight of that; unit testing, automation, language choice, cloud provider, software process methodology, etc. absolutely matter, but they are a means to an end.

With that in mind, let’s create a goal: application developers should be application developers.

Not DevOps engineers, or SREs, or CSRs, or any of the myriad other roles they are often asked to take on. I’ve seen my peers happiest when they are solving difficult problems and challenging themselves. Not when they are figuring out what magic configuration setting is breaking the platform. Command over their domain, and the ability and permission to “fix it”, are important to almost every appdev.

If developers are expensive to hire, train, replace and keep then they need to be enabled to do their job to the best of their ability. If a distributed, microservices platform has led your team to solving issues in the fashion of Sherlock Holmes solving his latest mystery, then perhaps you need a different approach.

Enter Istio and Aspen Mesh

It’s hard to know where the industry is with respect to the Hype Cycle for technologies like microservices, container orchestration, service mesh and a myriad of other choices; this isn’t an exact science where we can empirically take measurements. Most companies have older, but proven, systems built on LAMP or Java application servers or monoliths or applications that run on a big iron system. Those aren’t going away anytime soon, and developers will need to continue to support and add new features and capabilities to these applications.

Any new technology must provide a path for people to migrate their existing systems to something new.

If you have decided to move, or are moving, towards a microservice architecture, even if you have a monolith, implementing a service mesh should be among the possibilities explored. If you already have a microservice architecture that leverages gRPC or HTTP, and you're using Kubernetes, then the benefits of a service mesh can be quickly realized. It's easy to sign up for our beta, install Aspen Mesh and the sample bookinfo application, and see things in action. Once I did, I became a true believer. Not being coupled to a particular cloud provider, but being flexible and able to choose where and how things are deployed, empowers developers and companies to make their own choices.

Over the past month I’ve been able to quickly write application code and get it delivered faster than ever before; that is in large part due to the platform my colleagues have built on top of Kubernetes and Istio. I’ve been impressed by how easy a well-built cloud-native architecture can make things, and learning more about where Aspen Mesh, Istio and Kubernetes are heading gives me confidence that community and adoption will continue to grow.

As someone who has dealt with distributed systems issues continuously throughout his career, I know managing and troubleshooting a distributed system can be exhausting. I just want to enable others, even Aspen Mesh as we dogfood our own software, to do their jobs. To enable developers to add value and solve difficult problems. To enable a company to monitor their systems, whether it be mission critical or a simple CRUD application, to help ensure high uptime and responsiveness. To enable systems to be easily auditable when the compliance personnel have GDPR, PCI DSS or HIPAA concerns. To enable developers to quickly diagnose issues within their own system, fix them and monitor the change. To enable developers to understand how their services are communicating with each other--if it’s an n-tier system or a spider’s web--and how requests propagate through their system.

The value of Istio and the benefits of Aspen Mesh in solving these challenges are what drew me here. The opportunities are abundant and fruitful. I get to program in Go, in a SaaS environment and on a small team with a solid architecture. I am looking forward to becoming a part of the larger CNCF community. With microservices and cloud computing no longer being niche--which I’d argue hasn’t been the case for years--and with businesses adopting these new technology patterns quickly, I feel as if I made the right long-term career choice.


Top 3 Service Mesh Developments in 2019

Last year was about service mesh evaluation, trialing — and even hype.

While the interest in service mesh as a technology pattern was very high, it was mostly about evaluation and did not see widespread adoption. The capabilities service mesh can add to ease managing microservice-based applications at runtime are obvious, but the technology still needs to reach maturity before gaining widespread production adoption.

What we can say is service mesh adoption should evolve from the hype stage in a very real way this year.

What can we expect to see in 2019?

  1. The evolution and coalescing of service mesh as a technology pattern;
  2. The evolution of Istio as the way enterprises choose to implement service mesh;
  3. Clear use cases that lead to wider adoption.

The Evolution of Service Mesh

There are several architectural options when it comes to service mesh, but undoubtedly, the sidecar architecture will see the most widespread usage in 2019. Sidecar proxy as the architectural pattern, and more specifically, Envoy as the technology, have emerged as clear winners for how the majority will implement service mesh.

Considering control plane service meshes, we have seen the space coalesce around leveraging sidecar proxies. Linkerd, with its merging of Conduit and release of Linkerd 2, got on the sidecar train. And the original sidecar control plane mesh, Istio, certainly has the most momentum in the cloud native space. A look at the Istio Github repo shows:

  • 14,500 stars;
  • 6,400 commits;
  • 300 contributors.

And if these numbers don’t clearly demonstrate the momentum of the project, just consider the number of companies building around Istio:

  • Aspen Mesh;
  • Avi Networks;
  • Cisco;
  • OpenShift;
  • NGINX;
  • Rancher;
  • Tufin Orca;
  • Tigera;
  • Twistlock;
  • VMware.

The Evolution of Istio

So the big question is where is the Istio project headed in 2019? I should start with the disclaimer that the following are all guesses — they are well-informed guesses, but guesses nonetheless.

Community Growth

Now that Istio has hit 1.0, the number of contributors outside the core Google and IBM team is starting to grow. I’d hazard a guess that Istio will be truly stable around 1.3, sometime in June or July. Once the project gets to the point that it is usable at scale in production, I think you’ll really see it take off.

Emerging Vendor Landscape

At Aspen Mesh, we placed our bet on Istio 18 months ago. It is becoming clear that Istio will win service mesh in much the same way Kubernetes has won container orchestration.

Istio is a powerful toolbox that directly addresses many microservices challenges that are being solved with multiple manual processes, or are not being solved at all. The power of the open source community surrounding it also seems to be a factor that will lead to widespread adoption. As this becomes clearer, the number of companies building on Istio and building Istio integrations will increase.

Istio Will Join the Cloud Native Computing Foundation

Total guess here, but I’d bet on this happening in 2019. CNCF has proven to be an effective steward of cloud-native open source projects. I think this will also be key to the widespread adoption on which the long-term success of Istio depends. We shall see what the project founders decide, but this move will benefit everyone once the Istio project reaches the point where it makes sense for it to become a CNCF project.

Real-World Use Cases Are Key To Spreading Adoption

Service mesh is still a nascent market, and in the next 12-24 months we should see it expand past just the early adopters. But for those who have been paying attention, the why of a service mesh has largely been answered. The why is also certain to evolve, but for now, the reasons to implement a service mesh are clear. I think large parts of the how are falling into place, but more will emerge as service mesh encounters real-world use cases in 2019.

I think what remains unanswered is “what are the real-world benefits I am going to see when I put this into practice?” This is not a new question for an emerging technology. Neither is the way it will get answered anything new: through use cases. I can’t emphasize enough how key use cases based on actual users will be.

Service mesh is a powerful toolbox, but only a small swath of users will care about how cool the tech is. The rest will want to know what problems it solves.

I predict 2019 will be the year of service mesh use cases, which will naturally emerge as the number of adopters increases and they begin to talk about the value they are getting from a service mesh.

Some Final Thoughts

If you are already using a service mesh, you understand the value it brings. If you’re considering a service mesh, pay close attention to this space; the growing number of use cases will make the real-world value proposition clearer. And if you’re not yet decided on whether or not you need a service mesh, check out the recent Gartner, 451 and IDC reports on microservices — all of which say a service mesh will be mandatory by 2020 for any organization running microservices in production.


Using Kubernetes RBAC to Control Global Configuration In Istio

Why Configuration Matters

If I'm going to get an error for my code, I like to get an error as soon as possible.  Unit test failures are better than integration test failures. I prefer compiler errors to unit test failures - that's what makes TypeScript great.  Going even further, a syntax highlighter is a very proximate feedback of errors - if that keyword doesn't turn green, I can fix the spelling with almost no conscious burden.

Shifting from coding to configuration, I like narrow configuration systems that make it easy to specify a valid configuration and tell me as quickly as possible when I've done something wrong.  In fact, my dream configuration specification wouldn't allow me to specify something invalid. Remember Newspeak from 1984?  A language so narrow that expressing ungood thoughts becomes impossible.  Apparently, I like my programming and configuration languages to be dystopic and Orwellian.

If you can't further narrow the language of your configuration without giving away core functionality, the next best option is to narrow the scope of configuration.  I think about this in reverse - if I am observing some behavior from the system and say to myself, "Huh, that's weird", how much configuration do I have to look at before I understand?  It's great if this is a small and expanding ring - look at config that's very local to this particular object, then the next layer up (in Kubernetes, maybe the rest of the namespace), and so on to what is hopefully a very small set of global config.  Ever tried to debug a program with global variables all over the place? Not fun.

My three principles of ideal config:

  1. Narrow: Like Newspeak, don't allow me to even think of invalid configuration.
  2. Scope: Only let me affect a small set of things associated with my role.
  3. Time: Tell me as early as possible if it's broken.

Declarative config is readily narrow.  The core philosophy of declarative config is saying what you want, not how you get it.  For example, "we'll meet at the Denver Zoo at noon" is declarative. If instead I specify driving directions to the Denver Zoo, I'm taking a much more imperative approach.  What if you want to bike there? What if there is road construction and a detour is required? The value of declarative config is that if we focus on what we want, instead of how to get it, it's easier for me to bring my controller (my car's GPS) and you to bring yours (Google Maps in the Bike setting).

On the other hand, a big part of configuration is pulling a bunch of disparate pieces of information together at the last moment, from a bunch of different roles (think humans, other configuration systems and controllers), just before the system actually starts running.  Some amount of flexibility is required here.

Does Cloud Native Get Us To Better Configuration?

I think a key reason for the popularity of Kubernetes is that it has a great syntax for specifying what a healthy, running microservice looks like.  Its syntax is powerful enough in all the right places to be practical for infrastructure.

Service meshes like Istio robustly connect all the microservices running in your cluster.  They can adaptively route L7 traffic, provide end-to-end mTLS based encryption, and provide circuit breaking and fault injection.  The long feature list is great, but it's not surprising that the result is a somewhat complex set of configuration resources. It's a natural result of the need for powerful syntax to support disparate use cases coupled with rapid development.

Enabling Fine-grained RBAC with Traffic Claim Enforcer

At Aspen Mesh, we found users (including ourselves) spending too much time understanding misconfiguration.  The first way we addressed that problem was with Istio Vet, which is designed to warn you of likely incorrect or incomplete config, and provide guidance to fix it.  Sometimes we know enough that we can prevent the misconfiguration by refusing to allow it in the first place.  For some Istio config resources, we do that using a solution we call Traffic Claim Enforcer.

There are four Istio configuration resources that have global implications: VirtualService, Gateway, ServiceEntry and DestinationRule.  Whenever you create one of these resources, you create it in a particular namespace. They can affect how traffic flows through the service mesh to any target they specify, even if that target isn't in the current namespace.  This surfaces a scope anti-pattern - if I'm observing weird behavior for some service, I have to examine potentially all DestinationRules in the entire Kubernetes cluster to understand why.
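
As a sketch of the anti-pattern (all names hypothetical), a VirtualService created in one team's namespace can redirect traffic for a host owned by a completely different namespace:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reroute-billing
  namespace: team-a                          # created in team-a...
spec:
  hosts:
  - billing.team-b.svc.cluster.local         # ...but it changes routing for team-b's service
  http:
  - route:
    - destination:
        host: billing-canary.team-b.svc.cluster.local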

That might work in the lab, but we found it to be a serious problem for applications running in production.  Not only is it hard to understand the current config state of the system, it's also easy to break. It’s important to have guardrails that make it so the worst thing I can mess up when deploying my tiny microservice is my tiny microservice.  I don't want the power to mess up anything else, thank you very much. I don't want sudo. My platform lead really doesn't want me to have sudo.

Traffic Claim Enforcer is an admission webhook that waits for a user to configure one of those resources with global implications, and before allowing it will check:

  1. Does the resource have a narrow scope that affects only local things?
  2. Is there a TrafficClaim that grants the resource the broader scope requested?

A TrafficClaim is a new Kubernetes custom resource we defined that exists solely to narrow and define the scope of resources in a namespace.  Here are some examples:

kind: TrafficClaim
apiVersion: networking.aspenmesh.io/v1alpha3
metadata:
  name: allow-public
  namespace: cluster-public
claims:
# Anything on www.example.com
- hosts: [ "www.example.com" ]

# Only specific paths on foo.com, bar.com
- hosts: [ "foo.com", "bar.com" ]
  ports: [ 80, 443, 8080 ]
  http:
    paths:
      exact: [ "/admin/login" ]
      prefix: [ "/products" ]

# An external service controlled by ServiceEntries
- hosts: [ "my.external.com" ]
  ports: [ 80, 443, 8080, 8443 ]

TrafficClaims are controlled by Kubernetes Role-Based Access Control (RBAC).  Generally, the same roles or people that create namespaces and set up projects would also create TrafficClaims for those namespaces that need power to define service mesh traffic policy outside of their namespace scope.  Rule 1 about local scope above can be explained as "every namespace has an implied TrafficClaim for namespace-local traffic policy", to avoid requiring a boilerplate TrafficClaim.
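
For example, a platform team might grant a namespace's admins the ability to manage TrafficClaims with a standard Kubernetes Role. This is a sketch; the trafficclaims resource name is assumed from the apiVersion shown above:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: trafficclaim-editor
  namespace: cluster-public
rules:
- apiGroups: ["networking.aspenmesh.io"]
  resources: ["trafficclaims"]              # assumed plural resource name for the TrafficClaim CRD
  verbs: ["get", "list", "watch", "create", "update", "delete"]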

A pattern we use is to put global config into a namespace like "istio-public" - that's the only place that needs TrafficClaims for things like public DNS names.  Or you might have a couple of namespaces like "istio-public-prod" and "istio-public-dev" or similar. It’s up to you.

Traffic Claim Enforcer does not prevent you from thinking of invalid config, but it does help to limit scope. If I'm trying to understand what happens when traffic goes to my microservice, I no longer have to examine every DestinationRule in the system.  I only have to examine the ones in my namespace, and maybe some others that have special TrafficClaims (and hopefully keep that list small).

Traffic Claim Enforcer also provides an early failure for config problems.  Without it, it is easy to create conflicting DestinationRules, even in separate namespaces. This is a problem that Istio Vet will tell you about but cannot fix - it doesn't know which one should have priority. If you define TrafficClaims, then Traffic Claim Enforcer can prevent the conflicting configuration from being created at all.
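
As a sketch of that kind of conflict (hosts and namespaces hypothetical), two DestinationRules in different namespaces can disagree about the same host:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: payments
  namespace: team-a
spec:
  host: payments.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL       # team-a expects mTLS to this host
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: payments
  namespace: team-b
spec:
  host: payments.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE            # team-b disables TLS for the same host; which one wins?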

Hat tip to my colleague, Brian Marshall, who developed the initial public spec for TrafficClaims.  The Istio community is undertaking a great deal of work to scope/manage config aimed at improving system scalability.  We made Traffic Claim Enforcer with a focus on scoping to improve the human config experience as it was a need expressed by several of our users.  We're optimistic that the way Traffic Claim Enforcer helps with human concerns will complement the system scalability side of things.

If you want to give Traffic Claim Enforcer a spin, it's included as part of Aspen Mesh.  By default it doesn't enforce anything, so out-of-the-box it is compatible with Istio. You can turn it on globally or on a namespace-by-namespace basis.

Click play below to check it out in action!

https://youtu.be/47HzynDsD8w