Securing Containerized Applications With Service Mesh

The self-contained, ephemeral nature of microservices comes with some serious upside, but keeping track of every single one is a challenge, especially when trying to figure out how the rest are affected when a single microservice goes down. The end result is that if you’re operating or developing a microservices architecture, there’s a good chance part of your day is spent wondering what your services are up to.

With the adoption of microservices, problems also emerge due to the sheer number of services that exist in large systems. Problems like security, load balancing, monitoring and rate limiting that had to be solved once for a monolith now have to be handled separately for each service.

The technology aimed at addressing these microservice challenges has been rapidly evolving:

  1. Containers facilitate the shift from monolith to microservices by enabling independence between applications and infrastructure.
  2. Container orchestration tools solve microservices build and deploy issues, but leave many unsolved runtime challenges.
  3. Service mesh addresses runtime issues including service discovery, load balancing, routing and observability.

Securing services with a service mesh

A service mesh provides an advanced toolbox that lets users add security, stability and resiliency to containerized applications. One of the more common applications of a service mesh is bolstering cluster security. There are three distinct capabilities provided by the mesh that enable platform owners to create a more secure architecture.

Traffic Encryption  

As a platform operator, I need to provide encryption between services in the mesh. I want to leverage mTLS to encrypt traffic between services. I want the mesh to automatically encrypt and decrypt requests and responses, so I can remove that burden from my application developers. I also want it to improve performance by prioritizing the reuse of existing connections, reducing the need for the computationally expensive creation of new ones. I also want to be able to understand and enforce how services are communicating and prove it cryptographically.
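
In an Istio-based mesh like Aspen Mesh, this typically comes down to two pieces of configuration: a mesh-wide authentication policy that requires mTLS on the server side, and a DestinationRule that tells clients to originate Istio mutual TLS.  A minimal sketch (the names and hosts are illustrative, and the exact API versions depend on your Istio release):

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}                    # require mutual TLS for every service in the mesh
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: istio-system
spec:
  host: "*.local"               # apply to all in-mesh services
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL        # sidecars present their mesh-issued client certificates

The sidecars handle the certificate exchange and connection reuse, so application code keeps speaking plain HTTP or TCP inside the pod.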

Security at the Edge

As a platform operator, I want Aspen Mesh to add a layer of security at the perimeter of my clusters so I can monitor and address compromising traffic as it enters the mesh. I can use the built in power of Kubernetes as an ingress controller to add security with ingress rules such as whitelisting and blacklisting. I can also apply service mesh route rules to manage compromising traffic at the edge. I also want control over egress so I can dictate that our network traffic does not go places it shouldn't (blacklist by default and only talk to what you whitelist).
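
As a sketch of that egress posture (blacklist by default, whitelist what you need): with the mesh's outbound traffic policy set to registry-only, external hosts are unreachable until you add a ServiceEntry for them.  The host and port below are illustrative:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: allow-payments-api
spec:
  hosts:
  - payments.example.com        # the only external destination these workloads may call
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS

Anything not described by a ServiceEntry is then rejected at the sidecar before it ever leaves the cluster.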

Role Based Access Control (RBAC)

As the platform operator, It’s important that I am able to provide the level of least privilege so the developers on my platform only have access to what they need, and nothing more. I want to enable controls so app developers can write policy for their apps and only their apps so that they can move quickly without impacting other teams. I want to use the same RBAC framework that I am familiar with to provide fine-grained RBAC within my service mesh.
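
As a sketch of what namespace-scoped, least-privilege policy can look like with the rbac.istio.io resources of that era (the service, namespace and service account names are illustrative):

apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: orders-viewer
  namespace: orders             # the app team writes policy only in its own namespace
spec:
  rules:
  - services: ["orders.orders.svc.cluster.local"]
    methods: ["GET"]            # read-only access to the orders service
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-orders-viewer
  namespace: orders
spec:
  subjects:
  - user: "cluster.local/ns/frontend/sa/web"    # only the frontend's service account
  roleRef:
    kind: ServiceRole
    name: orders-viewer

Enforcement itself also has to be switched on for the namespaces you want covered (via the ClusterRbacConfig resource in that API), which keeps the default posture explicit.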

How a service mesh adds security

You’re probably thinking to yourself: traffic encryption and fine-grained RBAC sound great, but how does a service mesh actually get me there? Service meshes that leverage a sidecar approach are uniquely positioned to intercept and encrypt data. A sidecar proxy is a prime insertion point to ensure that every service in a cluster is secured and monitored in real time. Let’s explore some details around why sidecars are a great place for security.

Sidecar is a great place for security

Securing applications and infrastructure has always been daunting, in part because the adage really is true: you are only as secure as your weakest link.  Microservices are an opportunity to improve your security posture but can also cut the other way, presenting challenges around consistency.  For example, the best organizations use the principle of least privilege: an app should only have the minimum amount of permissions and privilege it needs to get its job done.  That's easier to apply where a small, single-purpose microservice has clear and narrowly-scoped API contracts.  But there's a risk that as application count increases (lots of smaller apps), this principle can be unevenly applied. Microservices, when managed properly, increase feature velocity and enable security teams to fulfill their charter without becoming the Department of No.

There's tension: Move fast, but don't let security coverage slip through the cracks.  Prefer many smaller things to one big monolith, but secure each and every one.  Let each team pick the language of their choice, but protect them with a consistent security policy.  Encourage app teams to debug, observe and maintain their own apps but encrypt all service-to-service communication.

A sidecar is a great way to balance these tensions with an architecturally sound security posture.  Sidecar-based service meshes like Istio and Linkerd 2.0 put their datapath functionality into a separate container and then situate that container as close to the application they are protecting as possible.  In Kubernetes, the sidecar container and the application container live in the same Kubernetes Pod, so the communication path between sidecar and app is protected inside the pod's network namespace; by default it isn't visible to the host or other network namespaces on the system.  The app, the sidecar and the operating system kernel are involved in communication over this path.  Compared to putting the security functionality in a library, using a sidecar adds the surface area of kernel loopback networking inside of a namespace, instead of just kernel memory management.  This is additional surface area, but not much.
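
To make that placement concrete, here is a hedged sketch of a pod with an application container and its Envoy-based sidecar (the images and ports are illustrative; in practice the mesh injects the sidecar and its companion init container for you):

apiVersion: v1
kind: Pod
metadata:
  name: shoppingcart
  labels:
    app: shoppingcart
spec:
  containers:
  - name: app                     # the application container
    image: example.com/shoppingcart:1.0
    ports:
    - containerPort: 8080
  - name: istio-proxy             # the sidecar; shares the pod's network namespace with the app
    image: docker.io/istio/proxyv2:1.1.3
  # the injected init container that redirects pod traffic through the sidecar is omitted here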

The major drawbacks of library approaches are consistency and sprawl in polyglot environments.  If you have a few different languages or application frameworks and take the library approach, you have to secure each one.  This is not impossible, but it's a lot of work.  For each different language or framework, you get or choose a TLS implementation (perhaps choosing between OpenSSL and BoringSSL).  You need a configuration layer to load certificates and keys from somewhere and safely pass them down to the TLS implementation.  You need to reload these certs and rotate them.  You need to evaluate "information leakage" paths: does your config parser log errors in plaintext (so it by default might print the TLS key to the logs)?  Is it OK for app core dumps to contain these keys?  How often does your organization require re-keying on a connection?  By bytes or time or both?  Minimum cipher strength?  When a CVE in OpenSSL comes out, what apps are using that version and need updating?  Who on each app team is responsible for updating OpenSSL, and how quickly can they do it?  How many apps have a certificate chain built into them for consuming public websites even if they are internal-only?  How many Dockerfiles will you need to update the next time a public signing authority has to revoke one?  slowloris?

Your organization can do all this work.  In fact, parts probably already have - above is our list of painful app security experiences but you probably have your own additions.  It is a lot of cross-organizational effort and process to get it right.  And you have to get it right everywhere, or your weakest link will be exploited.  Now with microservices, you have even more places to get it right.  Instead, our advice is to focus on getting it right once in the sidecar, and then distributing the sidecar everywhere, and get back to adding business value instead of duplicating effort.

There are some interesting developments on the horizon like the use of kernel TLS to defer bulk and some asymmetric crypto operations to the kernel.  That's great:  Implementations should change and evolve.  The first step is providing a good abstraction so that apps can delegate to lower layers. Once that's solid, it's straightforward to move functionality from one layer to the next as needed by use case, because you don't perturb the app any more.  As precedent, consider TCP Segmentation Offload, which lets the network card manage splitting app data into the correct size for each individual packet.  This task isn't impossible for an app to do, but it turns out to be wasted effort.  Once TCP segmentation was deferred to the kernel, it left the realm of the app.  Then, kernels, network drivers, and network cards were free to focus on the interoperability and semantics required to perform TCP segmentation at the right place.  That's our position for this higher-level service-to-service communication security: move it outside of the app to the sidecar, and then let sidecars, platforms, kernels and networking hardware iterate.

Envoy Is a Great Sidecar

We use Envoy as our sidecar because it's lightweight, has some great features and good API-based configurability.  Here are some of our favorite parts about Envoy:

  • Configurable TLS Parameters: Envoy exposes all the TLS configuration points you'd expect (cipher strength, protocol versions, curves).  The advantage to using Envoy is that they're configured the same way for every app using the sidecar.
  • Mutual TLS: Typically TLS is used to authenticate the server to the client, and to encrypt communication.  What's missing is authenticating the client to the server - if you do this, then the server knows what is talking to it.  Envoy supports this bi-directional authentication out of the box, which can easily be incorporated into a SPIFFE system.  In today's complex cloud datacenter, you're better off if you trust things based on cryptographic proof of what they are, instead of network perimeter protection of where they called from.
  • BoringSSL: This fork of OpenSSL removed huge amounts of code like implementations of obsolete ciphers and cleaned up lots of vestigial implementation details that had repeatedly been the source of security vulnerabilities.  It's a good default choice if you don't need any OpenSSL-specific functionality because it's easier to get right.
  • Security Audit: A security audit can't prove the absence of vulnerabilities but it can catch mistakes that demonstrate either architectural weaknesses or implementation sloppiness.  Envoy's security audit did find issues but in our opinion indicated a high level of security health.
  • Fuzzed and Bountied: Envoy is continuously fuzzed (exposed to malformed input to see if it crashes) and covered by Google's Patch Reward security bug bounty program.
  • Good API Granularity: API-based configuration doesn't mean "just serialize/deserialize your internal state and go."  Careful APIs thoughtfully map to the "personas" of what's operating them (even if those personas are other programs).  Envoy's xDS APIs in our experience partition routing behavior from cluster membership from secrets.  This makes it easy to make well-partitioned controllers.  A knock-on benefit is that it is easy in our experience to debug and test Envoy because config constructs usually map pretty clearly to code constructs.
  • No garbage collector: There are great languages with automatic memory management like Go that we use every day.  But we find languages like C++ and Rust provide predictable and optimizable tail latency.
  • Native Extensibility via Filters: Envoy has layer 4 and layer 7 extension points via filters that are written in C++ and linked into Envoy.
  • Scripting Extensibility via Lua: You can write Lua scripts as extension points as well.  This is very convenient for rapid prototyping and debugging.

One of these benefits deserves an even deeper dive in a security-oriented discussion.  The API granularity of Envoy is based on a scheme called "xDS" which we think of as follows:  Logically split the Envoy config API based on the user of that API.  The user in this case is almost always some other program (not a human), for instance a Service Mesh control plane element.

For instance, in xDS listeners ("How should I get requests from users?") are separated from clusters ("What pods or servers are available to handle requests to the shoppingcart service?").  The "x" in "xDS" is replaced with whatever functionality is implemented ("LDS" for listener discovery service).  Our favorite security-related partitioning is that the Secret Discovery Service can be used for propagating secrets to the sidecars independent of the other xDS APIs.

Because SDS is separate, the control plane can implement the Principle of Least Privilege: nothing outside of SDS needs to handle or have access to any private key material.
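
As a hedged sketch of that partitioning, an Envoy bootstrap in the spirit of the v2 xDS APIs can point listener and cluster discovery at the control plane while secrets travel only over the separate SDS channel (the cluster name is illustrative and field details vary by Envoy version):

dynamic_resources:
  lds_config:                     # listeners: "How should I get requests from users?"
    api_config_source:
      api_type: GRPC
      grpc_services:
      - envoy_grpc: {cluster_name: xds_cluster}
  cds_config:                     # clusters: "What can handle requests to the shoppingcart service?"
    api_config_source:
      api_type: GRPC
      grpc_services:
      - envoy_grpc: {cluster_name: xds_cluster}
# No private keys appear above.  A listener or cluster that needs TLS material
# references SDS instead, so only the secret-discovery path ever touches keys:
#   tls_certificate_sds_secret_configs:
#   - name: default
#     sds_config:
#       api_config_source: {api_type: GRPC, grpc_services: [...]}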

Mutual TLS is a great enhancement to your security posture in a microservices environment.  We see mutual TLS adoption as gradual - almost any real-world app will have some containerized microservices ready to join the service mesh and mTLS on day one.  But practically speaking, many of these will depend on mesh-external services, containerized or not.  It is possible in most cases to integrate these services into the same trust domain as the service mesh, and oftentimes these components can even participate in client TLS authentication so you get true mutual TLS.

In our experience, this happens by gradually expanding the "circle" of things protected with mutual TLS.  First, stateless containerized business logic, next in-cluster third party services, finally external state stores like bare metal databases.  That's why we focus on making the state of mTLS easy to understand in Aspen Mesh, and provide assistants to help you detect configuration mishaps.

What lives outside the sidecar?

You need a control plane to configure all of these sidecars.  In some simple cases it may be tempting to do this with some CI integration to generate configs plus DNS-based discovery.  This is viable but it's hard to do rapid certificate rotation.  Also, it leaves out more dynamic techniques like canaries, progressive delivery and A/B testing.  For this reason, we think most real-world applications will include an online control plane that should:

  • Disseminate configuration to each of the sidecars with a scalable approach.
  • Rotate sidecar certificates rapidly to reduce the value to an attacker of a one-time exploit of an application.
  • Collect metadata on what is communicating with what.

A good security posture means you should be automating some work on top of the control plane. We think these things are important (and built them into Aspen Mesh):

  • Organizing information to help humans narrow in on problems quickly.
  • Warning on potential misconfigurations.
  • Alerting when unhealthy communication is observed.
  • Inspecting the firehose of metadata for surprises - these patterns could be application bugs or security issues or both.

If you’re considering or going down the Kubernetes path, you should be thinking about the unique security challenges that come with microservices running in a Kubernetes cluster. Kubernetes solves many of these, but there are some critical runtime issues that a service mesh can make easier and more secure. If you would like to talk about how the Aspen Mesh platform and team can address your specific security challenge, feel free to find some time to chat with us.


Announcing Aspen Mesh 1.1

Aspen Mesh release 1.1.3 is now out to address a critical security update. The Aspen Mesh release is based on security patches released in the Istio 1.1.3 release - you can read more about the update here. We recommend Aspen Mesh users running 1.1 immediately upgrade to 1.1.3.

Close on the heels of the much anticipated Istio 1.1 release, we are excited to announce the release of Aspen Mesh 1.1. Our latest release provides all the features of Istio plus the support of the Aspen Mesh platform and team, and additional features you need to operate in the enterprise.

As with previous Istio releases, the open source community has done a great job of creating a release with exciting new features, improved stability and enhanced performance. The aim of this blog is to distill what has changed in the new release and point out the few gotchas and known issues in an easy-to-consume format.  We often find that there are so many changes (kudos to the community for all the hard work!) that it is difficult for users to discern which pieces they should care about and what actions they need to take on them. Hopefully, this blog will help address a few of these issues.

Before we delve into the specifics, let’s focus on why release 1.1 was a big milestone for the Istio community and how things were handled differently compared to previous releases:

  • Quality was the major focus of this release. If you look at the history, you will notice that it took six release candidates to get the release out. The maintainers worked diligently to resolve tricky user-identified issues and address them correctly instead of getting the release out on a predefined date. We would see constant updates/PRs (even on weekends) to address these issues, which is a testament to the dedication of the open source community.
  • User experience was a key area of focus in the community for this release. There was a new UX Working Group created to address various usability issues and to improve the user’s Istio journey from install to upgrade. We believe that this is a step in the right direction and will lead to easier Istio adoption. Aspen Mesh actively participates in these meetings with an eye on improving the experience of Istio and Aspen Mesh users.
  • Meaningful effort was put into improving the documentation, especially around consistent use of terminology.

It was great to see that the community listened to its users, addressed critical issues and didn’t rush to release 1.1. We look forward to how Project Mauve can further improve the engineering process, thereby improving the quality of Istio releases.

So, let’s move onto the exciting new features and improvements that are part of the Aspen Mesh 1.1 release.

Aspen Mesh 1.1 Features

Reduced sidecar memory usage
This was a long-standing issue that Istio users had faced when dealing with medium to large scale clusters. The Envoy sidecars’ memory consumption grew as new services and pods were deployed in the cluster, resulting in a considerable memory footprint for each sidecar proxy. As these sidecars are part of every pod in the mesh, this can quickly impact the scheduling and memory requirements for your cluster. In release 1.1, you can expect a significant reduction in the memory consumption by the sidecars. This benefit is primarily driven by reducing the set of statistics exposed by each sidecar. Previously, the sidecars were configured to expose metrics for every Envoy cluster, listener and HTTP connection manager, which would increase the number of metrics reported roughly in proportion to the number of services and pods. In release 1.1, the set of metrics is now reduced to the cluster and listener managers (in addition to Istio-specific stats), which always expose a fixed set of metrics. We found in our testing that the sidecar memory consumption is significantly lower compared to Aspen Mesh release 1.0.4 and we are looking forward to users being able to inject sidecars in more applications in their clusters.

New multi-cluster support

Earlier versions of Istio supported multiple clusters via a single control plane topology. This meant that the Istio control plane would be deployed only on one cluster, which would manage services on both local and remote clusters. Additionally, it required a flat network IP space for the pods to communicate across clusters. These restrictions limited real-world uses of multi-cluster functionality as the control plane could easily become a single point of failure and the flat IP space was not always feasible. In this release, support was added for a multiple control plane topology which provides the desired control plane availability and no restrictions on the IP layout. Networking across clusters is set up via the ingress gateways which rely on mTLS (a common root Certificate Authority across clusters) to verify the peer traffic. We are excited to see new use cases emerge for multi-cluster service mesh and how enterprises can leverage Aspen Mesh to truly build resilient and highly available applications deployed across clusters.

CNI support
Istio by default sets up the pod traffic redirection to/from the sidecar proxy by injecting an init container that uses iptables under the hood. The ability to use iptables requires elevated permissions, which is a hindrance to adopting Istio in various organizations due to compliance concerns. Istio and Aspen Mesh now support CNI as a new way to perform traffic redirection, removing the need for elevated permissions. It is great to see this enhancement as we think it is critical to have the principle of least privilege applied to the service mesh. We’re excited to be able to drive advanced compliance use cases with our customers over the next few months.

New sidecar resource
One of the biggest challenges users faced with the old releases was that all the sidecars in the mesh had configuration related to all the services in the cluster even though a particular sidecar proxy only needed to talk to a small subset of services. This resulted in excess churn as massive amounts of configuration were processed and transmitted to the sidecars with every configuration update. This caused intermittent request failures and CPU spikes in all the sidecars on any configuration change in the cluster. The 1.1 release added a new Sidecar resource to enable operators to configure the ingress and egress of each proxy. With this resource, users can control the scope and visibility of configuration distributed to the sidecars and attain better resource utilization and scalability of Istio components.
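
For example, a minimal namespace-wide Sidecar resource might look like the sketch below (the namespace and hosts are illustrative); note the istio-system entry, which relates to a gotcha covered later in this post:

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: orders
spec:
  egress:
  - hosts:
    - "./*"                 # services in this namespace
    - "istio-system/*"      # keep the control plane reachable
    - "frontend/*"          # the one other namespace these workloads actually call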

Apart from the aforementioned major changes, there are quite a few lesser known enhancements in this release which can be helpful in exploring Aspen Mesh capabilities.

Enabling end-user JWT authentication by path
Istio ingressgateway and sidecar proxies support decoding JWT provided by the end user and passing it to the applications as an HTTP request header. This has the operational benefit of isolating authentication from application code and instead using the service mesh infrastructure layer for these critical security operations. In earlier versions of Istio you could only enable/disable this feature on a per service or port basis but not for specific HTTP paths. This was very limiting especially for ingress gateways where you might have some paths requiring authentication and some that didn’t. In release 1.1, an experimental feature was added to enable end user JWT authentication based on request path.
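
As a hedged sketch of what this can look like with the v1alpha1 authentication Policy (the issuer, JWKS URI and paths are illustrative, and because the feature is experimental the field names may shift between releases):

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  targets:
  - name: istio-ingressgateway
  origins:
  - jwt:
      issuer: "https://auth.example.com"
      jwksUri: "https://auth.example.com/.well-known/jwks.json"
      triggerRules:
      - excludedPaths:
        - exact: /healthz         # no end-user token required on this path
  principalBinding: USE_ORIGIN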

New Helm installation options
There are many new Helm installation options added in this release (in addition to the old ones) that are useful in customizing Aspen Mesh based on your needs. We often find that customer use cases are quite different and unique for every environment, so the addition of these options makes it easy to tailor the service mesh to your needs. Some of the important new options are listed below, followed by a combined values sketch:

  • Node selector - Many of our customers want to install the control plane components on their own nodes for better monitoring, isolation and resilience. In this release there is an easy Helm option, global.defaultNodeSelector to achieve this functionality.
  • Tracing backend address - Users often have their tracing set up and want to easily add Istio on top to work with their existing tracing system. In the older version it was quite painful to provide a different tracing backend to Istio (used to be hardcoded to “zipkin.istio-system”). This release added a new “global.tracer.zipkin.address” Helm option to enable this functionality. If you’re an Aspen Mesh customer, we automatically set this up for you so that the traces are sent to the Aspen Mesh platform where you can access them via our hosted Jaeger service.
  • Customizable proxy access log format - The sidecar proxies in the older releases performed access logging in the default Envoy format. Even though the information is great, you might have access logging set up in other systems in your environment and want to have a uniform access logging format throughout your cluster for ease of parsing, searching and tooling. This new release supports a Helm option “global.proxy.accessLogFormat” for users to easily customize the logging format based on their environment.
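
A hedged values.yaml sketch combining the three options above (the node label, tracing address and format string are illustrative):

global:
  defaultNodeSelector:                      # pin control plane components to dedicated nodes
    node-role.example.com/infra: "true"
  tracer:
    zipkin:
      address: zipkin.observability:9411    # your existing tracing backend
  proxy:
    accessLogFormat: "[%START_TIME%] %REQ(:METHOD)% %REQ(:PATH)% %RESPONSE_CODE%\n"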

This release also added many debugging enhancements which make it easy for users to operate and debug when running an Aspen Mesh cluster. Some critical enhancements in this area were:

Istioctl enhancements
Istioctl is a tool similar to kubectl for performing Istio specific operations which can aid in debugging and validating various Istio configuration and runtime issues. There were several enhancements made to this tool which are worth mentioning:

  • Verify install - Istioctl now supports an experimental command to verify the installation in your cluster. This is a great step for first time Istio users before you dive deeper into exploring all the Istio capabilities. If you’re an Aspen Mesh customer, our demo installer automatically does this step for you and lets you know if the installation was successful.
  • Static configuration validation - Istioctl supports a “validate” command for users to verify their configuration before applying it to their cluster. Using this effectively can prevent easy misconfigurations and surprises which can be hard to debug. Note that Galley now also performs validation and rejects configuration if it’s invalid in the new release. If you’re an Aspen Mesh customer, you can use this new functionality in addition to the automated runtime analysis we perform via istio-vet. We find that the static single resource validation is a good first step but an automated tool like istio-vet from Aspen Mesh which can perform runtime analysis across multiple resources is also needed to ensure a properly functioning mesh.
  • Proxy health status - Support was added to quickly inspect and verify the health status of a proxy (default port 15020), which can be very useful in debugging request failures. We often found that users struggled to understand what qualifies as a healthy Istio proxy (sidecar or gateway) and we think this can help alleviate that issue.

Along with all of these great new improvements, there are a few gotchas or unexpected behaviors you might observe especially if you’re upgrading Istio from an older version. We’ve done a thorough investigation of these potential issues and are making sure our customers have a smooth transition with our releases. However, for the broader community let’s cover a few important gotchas to be aware of:

  • Access allowed to any external services by default - The new Istio release will by default allow access to any external service. In previous releases, all external traffic was blocked and required users to explicitly whitelist external services via ServiceEntry. This decision was reached by the community to make it easier for customers to add Istio on top of their existing deployments and not break working traffic. However, we think this is a major change that can lead to security escapes if you’re upgrading to this version. With that in mind, the Aspen Mesh distribution of the release will continue to block all external traffic by default. If you want to customize this setting, the Helm option “global.outboundTrafficPolicy.mode” can be updated based on your requirement.
  • Proxy access logs disabled by default - In this Istio release, the default behavior for proxy access logging has changed and it is now turned off by default. For first time users it is very helpful to observe access logs as the traffic flows through their services in the mesh. Additionally, if you’re upgrading to a new version and find that your logs are missing, it might break debugging capabilities that you have built around it. Because of this, the Aspen Mesh distribution has the proxy access logs turned on by default. You can customize this setting by updating the Helm option “global.proxy.accessLogFile” to “/dev/stdout”.
  • Every Sidecar resource requires “istio-system” - If you’re configuring the newly available Sidecar resource, be sure to include “istio-system” as one of the allowed egress hosts. During our testing we found that if “istio-system” is missing from the egress hosts, the sidecar proxies start experiencing failures communicating with the Istio control plane, which can lead to cascading failures. We are working with the community to address this issue so that users can configure this resource with minimal surprises.
  • Mixer policy checks disabled by default - Mixer policy checks were turned on by default in earlier Istio releases, which meant that the sidecar proxies and gateways would always consult Mixer in the Istio control plane to check policy and forward the request to the application only if the policy allowed it. This feature was seldom used but added latency due to the out-of-process network call. This new release turned off policy checks by default after much deliberation and debate in the community. What this means is that if you had previously configured policy checks and were relying on Mixer to enforce them, after the upgrade those configurations will no longer have any effect. If you would like to enable them by default, set the Helm option “global.disablePolicyChecks” to false (a combined values sketch for these overrides follows this list).
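
For reference, here is a hedged values.yaml sketch of the overrides discussed above if you want to restore the pre-1.1 behavior yourself:

global:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY            # block external traffic unless whitelisted via ServiceEntry
  proxy:
    accessLogFile: "/dev/stdout"   # turn proxy access logs back on
  disablePolicyChecks: false       # re-enable Mixer policy checks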

We hope this blog has made it easy to understand the scope and impact of the 1.1 release. At Aspen Mesh, we keep a close tab on the community and actively participate to make the adoption and upgrade path easier for our customers. We believe that enterprises should spend less time and effort on configuring the service mesh and focus on adding business value on top.

We'll be covering subsequent topics and deep diving into how you can set up and make the most of new 1.1 features like multi-cluster support. Be sure to subscribe to the Aspen Mesh blog so you don't miss out.

If you want to quickly get started with the Aspen Mesh 1.1 release, grab it here, or if you’re an existing customer, please follow the upgrade instructions in our documentation.


Expanding Service Mesh Without Envoy

Istio uses the Envoy sidecar proxy to handle traffic within the service mesh.  The following article describes how to use an external proxy, F5 BIG-IP, to integrate with an Istio service mesh without having to use Envoy for the external proxy.  This can provide a method to extend the service mesh to services where it is not possible to deploy an Envoy proxy.

This method could be used to secure a legacy database to only allow authorized connections from a legacy app that is running in Istio, but not allow any other applications to connect.

Securing Legacy Protocols

A common problem that customers face when deploying a service mesh is how to restrict access to an external service to a limited set of services in the mesh.  When all services can run on any node, it is not possible to restrict access by IP address (“good container” comes from the same IP as “malicious container”).

One method of securing the connection is to isolate an egress gateway to dedicated nodes and restrict traffic to the database from those nodes.  This is described in Istio’s documentation:

Istio cannot securely enforce that all egress traffic actually flows through the egress gateways. Istio only enables such flow through its sidecar proxies. If attackers bypass the sidecar proxy, they could directly access external services without traversing the egress gateway. Thus, the attackers escape Istio’s control and monitoring. The cluster administrator or the cloud provider must ensure that no traffic leaves the mesh bypassing the egress gateway.

   -- https://istio.io/docs/examples/advanced-gateways/egress-gateway/#additional-security-considerations (2019-03-25)

Another method would be to use mesh expansion to install Envoy onto the VM that is hosting your database. In this scenario the Envoy proxy on the database server would validate requests prior to forwarding them to the database.

The third method that we will cover will be to deploy a BIG-IP to act as an egress device that is external to the service mesh.  This is a hybrid of mesh expansion and multicluster mesh.

Mesh Expansion Without Envoy

Under the covers Envoy is using mutual TLS to secure communication between proxies.  To participate in the mesh, the proxy must use certificates that are trusted by Istio; this is how VM mesh expansion and multicluster service mesh are configured with Envoy.  To use an alternate proxy we need to have the ability to use certificates that are trusted by Istio.

Example of Extending Without Envoy

A proof-of-concept of extending the mesh can be built with the following example.  We will create an “echo” service that is TCP based and lives outside of the service mesh.  The goal will be to restrict access so that only authorized “good containers” can connect to the “echo” service via the BIG-IP.  The steps involved are:

  1. Retrieve/Create certificates trusted by Istio
  2. Configure external proxy (BIG-IP) to use trusted certificates and only trust Istio certificates
  3. Add policy to external proxy to only allow “good containers” to connect
  4. Register BIG-IP device as a member of the Istio service mesh
  5. Verify that “good container” can connect to “echo” and “bad container” cannot

First we install a set of certificates on the BIG-IP that Envoy will trust and configure the BIG-IP to only allow connections from Istio.  The certs could either be pulled directly from Kubernetes (similar to setting up mesh expansion) or generated by a common CA that is trusted by Istio (similar to multicluster service mesh).
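
For the Kubernetes route, Citadel-era Istio stores each service account's keying material in a secret in the workload's namespace, so the certificates can be pulled with ordinary Kubernetes APIs.  A hedged sketch of that secret's shape (the namespace is illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: istio.default              # Citadel names these istio.<service-account>
  namespace: trusted
type: istio.io/key-and-cert
data:
  root-cert.pem: "<base64 root CA certificate>"
  cert-chain.pem: "<base64 workload certificate chain>"
  key.pem: "<base64 workload private key>"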

Once the certs are retrieved/generated we install them onto the proxy, BIG-IP, and configure the device to only trust client side certificates that are generated by Istio.

To enable a policy that validates the identity of the “good container” we will inspect the X509 Subject Alternative Name field of the client certificate and extract the SPIFFE name that contains the identity of the container.

Once the external proxy is configured we can register the device using “istioctl register” (similar to mesh expansion).

To verify that our test scenario is working we will have two namespaces, “default” and “trusted”.  Connections from “trusted” will be allowed and connections from “default” will be rejected.  From each namespace we create a pod and run the command “nc bigip.default.svc.cluster.local 9000”.  Looking at our BIG-IP logs we can verify that our policy (iRule) worked:

Mar 25 18:56:39 ip-10-1-1-7 info tmm5[17954]: Rule /Common/log_cert <CLIENTSSL_CLIENTCERT>: allowing: spiffe://cluster.local/ns/trusted/sa/sleep
Mar 25 18:57:00 ip-10-1-1-7 info tmm2[17954]: Rule /Common/log_cert <CLIENTSSL_CLIENTCERT>: rejecting spiffe://cluster.local/ns/default/sa/default

Connection from our “good container”

/ # nc bigip.default.svc.cluster.local 9000
hi
HI

Connection from our “bad container”

# nc bigip.default.svc.cluster.local 9000

In the case of the “bad container” we are unable to connect.  The “nc” (netcat) command simulates a very basic TCP client.  A more realistic example would be connecting to an external database that contains sensitive data.  In the “good” example we are echoing back the capitalized input (“hi” becomes “HI”).

Just One Example

In this article we looked at expanding a service mesh without Envoy.  This was focused on egress TCP traffic, but it could be expanded to:

  • Using BIG-IP as an SNI proxy instead of NGINX
  • Securing inbound traffic using mTLS and/or JWT tokens
  • Using BIG-IP as an ingress gateway
  • Using ServiceEntry/DestinationRules instead of a registered service (see the sketch after this list)
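
As a hedged sketch of that last option, the external echo service could be described to the mesh with a ServiceEntry plus a DestinationRule that originates Istio mutual TLS toward the BIG-IP (the host and port are illustrative; depending on the certificate the BIG-IP presents, you may need MUTUAL with explicit certificate paths instead of ISTIO_MUTUAL):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-echo
spec:
  hosts:
  - echo.example.com               # the BIG-IP-fronted echo service
  location: MESH_EXTERNAL
  ports:
  - number: 9000
    name: tcp-echo
    protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: external-echo
spec:
  host: echo.example.com
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL           # present the sidecar's Istio-issued client certificate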

If you want to see the process in action, check out this short video walkthrough.

https://youtu.be/83GdmwTvWLI

Let me know in the comments whether you’re interested in any of these use-cases or come-up with your own.  Thank you!


Enterprise Service Mesh

From Middleware to Containers: Infrastructure is Finally Cool

As someone fresh out of school just starting my software engineering career, I want to solve interesting problems. Who doesn’t? A computer science degree gave me the opportunity to see a spectrum of different engineering opportunities, which led me to decide that working on infrastructure would be the most impactful area and, with the rise of cloud native technologies, actually a compelling space to work in. There is a difference between developing new functionality and developing to solve existing problems. More often than not, the solutions that address existing challenges in an industry are the ones that are used the most and last the longest. This is what excites me about working on infrastructure: the ability to build something that millions of people will rely on to run their applications. On the surface it doesn’t appear to be the most exciting work, but you can be sure that your time and effort is being put to good use.

You want to see your contributions make an impact somehow, whether that’s writing webapps, iPhone applications, business tools, etc. - the things that people actually use day-to-day. Infrastructure may not be as visible or as tangible as these kinds of technologies, but it’s gratifying to know that it’s the underlying piece that makes it all work. As much as I want to be able to say that I contribute to something that all of my non-tech friends can easily understand (like the front-end of Netflix), I think it’s even more interesting to make them think about the things that happen behind the scenes. We all expect our favorite apps, websites, etc. to respond quickly to our requests no matter how many people are using them at the same time, but on the backend this is not something that is easy to handle and properly test for. What about security? We also expect that when we trust software with our information, it isn’t being easily intercepted or leaked along the way. Scalability and security are just two of the many kinds of problems that software infrastructure addresses, and in the end we rely on it to actually make the front-end software usable. The advantage these days is that infrastructure software has become an incredibly interesting space to be in. Tools like Docker, Kubernetes and Istio are fascinating technologies with vibrant communities around them.

One of the cool, heavily used Kubernetes-related projects that I’m a fan of is Envoy. I can’t help but think about how some version of Envoy is being used every time I order a Lyft to make sure I actually get a ride. Infrastructure doesn’t seem as intriguing at first because, as important as it is, it’s running in the background and easily forgotten. Everyone needs it, but in the end, who wants to build it? The answer to that question is definitely changing as the infrastructure landscape evolves. Kubernetes, the OS of the cloud, has become a project that everyone wants a hand in. You don’t hear about people itching to make contributions to the Linux kernel, but you hear about Kubernetes and containers everywhere.

Coming up with solutions to solve the problems that we’re running into today has become more attractive to junior developers especially. We’re watching as more and more people are using technology every day, and like I mentioned before, we want our contributions to be impactful. How are we going to handle all of this traffic in a smooth and scalable way? Enter: distributed systems. Microservices are critical to constructing applications that can handle huge transaction volumes at scale. Enterprise applications run by companies like Lyft, Twitter and Google would fall apart with even normal rates of traffic without their distributed architectures. Working on these infrastructural pieces is challenging, and provides the impact that we, junior developers, are looking for.

Another thing that makes this work enticing to junior developers is that it involves an open source community. The way that the tech community has decided to solve some of these bigger, infrastructure-related problems has largely been through open source, which is both intimidating and inviting to those who are new to the tech industry. There is an open group of people talking about the technology and a community willing to help, but at the same time it’s daunting to contribute to these bigger open source projects when you’re just starting out. I will say, however, that the benefits of being able to leverage so many technologies and the community support make it a lot of fun to be a part of.

To recap, here are some of my favorite things about working on infrastructure:

  • We can solve some really hard problems with good infrastructure!
  • If it’s done right, you can build something that can be easily customized to solve problems of various sizes and for all kinds of use cases.
  • All of the cool things and services we consume daily rely on it. Talk about actually seeing your hard work being put to good use!
  • Whether you’re doing proprietary work or not, you are being introduced to open source and the community that comes with it.

I’ll admit, developing infrastructure, despite all of the interesting bits, is still not the most glamorous work. It’s the underlying layer that most people take for granted in their everyday use of technology, and is often less shiny than a beautifully designed UI and the other components that sit on top of it. But once you dig in, it’s exciting to see what an impact you can make with it, and cloud-native technologies and communities make it a fun space to work in. What I will say though is that it’s a great way to start out your career in tech, and it’s a fun, challenging, and very rewarding place to be.