How to Approach Zero-Trust Security with a Service Mesh

Last year was challenging for data security. In the first nine months alone, there were 5,183 breaches reported with 7.9 billion records exposed. Compared to mid-year 2018, the total number of breaches was up 33.3 percent and the total number of records exposed more than doubled, up 112 percent.

What does this tell us? That, despite significant technology investments and advancements, security is still hard. A single phishing email, missed patch, or misconfiguration can let the bad guys in to wreak havoc or steal data. For companies moving to the cloud and the cloud-native architecture of microservices and containerized applications, it’s even harder. Now, in addition to the perimeter and the network itself, there’s a new network infrastructure to protect: the myriad connections between microservice containers.

With microservices, the surface area available for attack has increased exponentially, putting data at greater risk. Moreover, network-related problems like access control, load balancing, and monitoring that had to be solved once for a monolith application now must be handled separately for each service within a cluster.

Zero-Trust Security and Service Mesh

Security is one of the most critical parts of your application to implement correctly. Emerging today to address security in this environment is the convergence of the Zero-Trust approach to network security and service mesh technology. A service mesh allows you to handle security more efficiently by combining security and operations capabilities into a transparent infrastructure layer that sits between the containerized application and the network.

Here are some examples of attacks that a service mesh can help mitigate:

  • Service impersonation
    • A bad actor gains access to the private network for your applications, pretends to be an authorized service, and starts making requests for sensitive data.
  • Unauthorized access
    • A legitimate service makes requests for sensitive data that it is not authorized to obtain.
  • Packet sniffing
    • A bad actor gains access to your applications’ private network and captures sensitive data from legitimate requests going over the network.
  • Data exfiltration
    • A bad actor sends sensitive data out of the protected network to a destination of their choosing.

So what are the tenets of Zero-Trust security, and how can a service mesh enable Zero Trust in the microservices environment? And how can Zero-Trust capabilities help organizations address and demonstrate compliance with stringent industry regulations?

Threats and Securing Microservices

Traditionally, network security has been based on maintaining a strong perimeter to help thwart attackers, commonly known as the moat-and-castle approach. With a secure perimeter constructed of firewalls, you trust the internal network by default, and by extension, anyone who’s already there. Unfortunately, this was never a reliably effective strategy, and it is becoming even less effective in a world where employees expect access to applications and data from anywhere in the world, on any device. Moreover, security professionals have long considered other types of threats -- such as insider threats -- to be among the greatest risks to the data companies protect, driving the development of new ways to address these challenges.

In 2010, Forrester Research coined the term “Zero Trust” and overturned the perimeter-based security model with a new principle: “never trust, always verify.” That means no individual or machine is trusted by default from inside or outside the network. Another Zero-Trust precept: “assume you’ve been compromised but may not yet be aware of it.” With the time to identify and contain a breach running at 279 days in 2019, that’s not an unsafe assumption.

Starting in 2013, Google began transitioning its network infrastructure to Zero Trust with much success, and it has made the results of its efforts public through BeyondCorp. Fast forward to 2019, and plans to adopt this new paradigm have spread across industries like wildfire, largely in response to massive data breaches and stricter regulatory requirements.

While there are myriad Zero-Trust networking solutions available for protecting the perimeter and the operation of corporate networks, there are many new miles of connections within the microservices environment that also need protection. A service mesh provides critical security capabilities such as observability to help optimize mean time to detect (MTTD) and mean time to repair (MTTR), as well as ways to implement and manage encryption, authentication, authorization, policy control and configuration in Kubernetes clusters.

Security Within the Kubernetes Cluster

Here are a few ways to approach enhancing your security with a service mesh:

  • Simplify microservices security with incremental mTLS
  • Manage identity, certificates and authorization
  • Access control and enforcing the level of least privilege
  • Monitoring, alerting and observability

A service mesh also adds controls over traffic ingress and egress at the perimeter. Allowed user behavior is addressed with role-based access control (RBAC). With these controls, the Zero-Trust philosophy of “trust no one, authenticate everyone” stays in force by providing enforceable least-privilege access to services in the mesh.
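As a sketch of what least-privilege access can look like in practice, here is an Istio `AuthorizationPolicy` (available in Istio 1.4 and later); the namespace, service account, and workload labels below are illustrative placeholders:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: productpage-least-privilege
  namespace: default
spec:
  # Applies only to pods labeled app: productpage
  selector:
    matchLabels:
      app: productpage
  rules:
  - from:
    - source:
        # Only this workload identity (service account) may call in
        principals: ["cluster.local/ns/default/sa/bookinfo-gateway"]
    to:
    - operation:
        methods: ["GET"]
```

Once a policy selects a workload, any request that does not match one of its rules is denied, which is exactly the “trust no one, authenticate everyone” posture described above.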

Aspen Mesh can help you to achieve a Zero-Trust security posture by applying these concepts and features. As an enterprise- and production-ready service mesh that extends the capabilities of Istio to address enterprise security and compliance needs, we also provide an intuitive hosted user interface and dashboard that make it easier to deploy, monitor, and configure these features.

Learn More About Zero-Trust Security and Service Mesh

Interested in learning more about how service mesh can help you achieve Zero-Trust security? Get the free white paper by completing the form below.

Aspen Mesh 1.4.4 & 1.3.8 Security Update

Aspen Mesh is announcing the release of 1.4.4 and 1.3.8 (based on upstream Istio 1.4.4 and 1.3.8), both of which address a critical security vulnerability. All of our customers are strongly encouraged to upgrade immediately to the release that corresponds to their currently deployed Aspen Mesh version.

The Aspen Mesh team reported this CVE to the Istio community per the Istio CVE reporting guidelines, and our team was able to contribute further to the community--and to all Istio users--by fixing this issue in upstream Istio. Because this CVE is extremely easy to exploit and its risk score was deemed very high (9.0), we wanted to respond with urgency by getting a patch into upstream Istio and out to our customers as quickly as possible. This is just one of the many ways we provide value to our customers as a trusted partner, ensuring that the pieces from Istio (and Aspen Mesh) are secure and stable.

Below are details about CVE-2020-8595, steps to verify whether you’re currently vulnerable, and instructions for upgrading to the patched releases.

CVE Description

A bug in Istio's Authentication Policy exact path matching logic allows unauthorized access to resources without a valid JWT token. This bug affects all versions of Istio (and Aspen Mesh) that support JWT Authentication Policy with path based trigger rules (all 1.3 & 1.4 releases). The logic for the exact path match in the Istio JWT filter includes query strings or fragments instead of stripping them off before matching. This means attackers can bypass the JWT validation by appending “?” or “#” characters after the protected paths.
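The matching flaw is easy to see in a simplified sketch. This is illustrative Python, not Istio’s actual Envoy filter code: a naive exact match on the raw request path treats “/productpage?” as a different string, while the fixed behavior strips the query string and fragment before matching.

```python
from urllib.parse import urlsplit

# The set of paths the (hypothetical) policy protects with JWT validation
PROTECTED = {"/productpage"}

def requires_jwt_buggy(request_path: str) -> bool:
    # Vulnerable logic: compares the raw path, query string and all.
    # "/productpage?" != "/productpage", so the JWT check never triggers.
    return request_path in PROTECTED

def requires_jwt_fixed(request_path: str) -> bool:
    # Fixed logic: strip the query string and fragment before matching.
    path = urlsplit(request_path).path
    return path in PROTECTED

# "/productpage?" slips past the buggy matcher but not the fixed one:
assert requires_jwt_buggy("/productpage") is True
assert requires_jwt_buggy("/productpage?") is False   # the bypass
assert requires_jwt_fixed("/productpage?") is True
assert requires_jwt_fixed("/productpage#frag") is True
```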

Example Vulnerable Istio Configuration

A JWT Authentication Policy can be configured to trigger on an exact HTTP path match such as “/productpage”. In this example, the Authentication Policy is applied at the ingress gateway service so that any request whose path exactly matches “/productpage” requires a valid JWT. In the absence of a valid JWT, the request is denied and never forwarded to the productpage service. However, due to this CVE, a request made to the ingress gateway with the path “/productpage?” and no valid JWT is not denied but is sent along to the productpage service, thereby allowing access to a protected resource without a valid JWT.
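A configuration along these lines would be vulnerable on an unpatched release. The field names follow Istio 1.4’s `authentication.istio.io/v1alpha1` Policy API; the issuer and jwksUri values are placeholders:

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: jwt-productpage
  namespace: istio-system
spec:
  targets:
  - name: istio-ingressgateway
  origins:
  - jwt:
      issuer: "example-issuer"                              # placeholder
      jwksUri: "https://example.com/.well-known/jwks.json"  # placeholder
      triggerRules:
      - includedPaths:
        - exact: /productpage
  principalBinding: USE_ORIGIN
```

With this policy in place on an unpatched proxy, a tokenless request for “/productpage” is rejected, while “/productpage?” sails through.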


Since this vulnerability is in the Envoy filter added by Istio, you can also check the proxy image you’re using in the cluster locally to see whether you’re currently vulnerable. Download and run this script to verify whether the proxy image you’re using is vulnerable. Thanks to Francois Pesce from the Google Istio team for helping us create this test script.

Upgrading to This Version

Please follow the upgrade instructions in our documentation to upgrade to these versions. Since this vulnerability affects the sidecar proxies and gateways (ingress and egress), it is important to follow the post-upgrade tasks here and perform a rolling upgrade of all of your sidecar proxies and gateways.

Announcing Aspen Mesh Secure Ingress Policy

In the new Application Economy, developers are building and deploying applications at a far greater frequency than ever before. Organizations gain this agility by shifting to microservices architectures, powered by Kubernetes and continuous integration and delivery (CI/CD). For businesses to derive value from these applications, they need to be exposed to the outside world in a secure way so that their customers can access them--and have a great user experience. That’s such an obvious statement, you’re probably wondering why I even bothered saying it.

Well, within most organizations, securely exposing an application to the outside world is complicated. Ports, protocols, paths, and auth requirements need to be collected. Traffic routing resources and authentication policies need to be configured. DNS entries and TLS certificates need to be created and mounted. Application teams know some of these things and platform owners know others. Pulling all of this together is painful and time consuming.

This problem is exacerbated by the lack of APIs mapping intent to the persona performing the task. Let’s take a quick look at the current landscape.

Kubernetes allows orchestration (deploying and upgrading) of applications, but it doesn’t provide any way to capture application behavior like the protocols, paths and security requirements of these APIs. To securely expose an application to its users, platform operators currently need to capture this information from developers in private conversations, then create additional Kubernetes resources like Ingress, which in turn create the plumbing that allows traffic from outside the cluster in and routes it to the appropriate backend application. Alternatively, the advanced routing capabilities in Istio make it possible to control more aspects of traffic management, and at the same time allow developers to offload functionality like JWT authentication from their applications to the infrastructure.
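For illustration, a minimal Kubernetes Ingress of the kind a platform operator would hand-craft today might look like this (the host, service name, and port are placeholders; newer clusters use the `networking.k8s.io/v1` API with a slightly different backend schema):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: productpage
spec:
  rules:
  - host: bookinfo.example.com        # placeholder domain
    http:
      paths:
      - path: /productpage
        backend:
          serviceName: productpage    # placeholder backend service
          servicePort: 9080
```

Note that nothing in this resource captures the application’s authentication requirements; that context lives only in the developers’ heads.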

But the missing piece in both of these scenarios is a reliable and scalable way to gather information about applications from developers, independent of platform operators, and to enable platform operators to securely configure the resources they need to expose those applications.

Additionally, configuring the Ingress or Istio routing APIs is only part of the puzzle. Operators also need to set up domain names (DNS) and obtain domain certificates (static, or dynamic via Let’s Encrypt, for example) in order to secure the traffic entering their clusters. All of this means managing a lot of moving pieces, with the possibility of failure at multiple steps along the way.

Aspen Mesh Policy Framework - Before


To solve these challenges, we are excited to announce the release of Aspen Mesh Secure Ingress Policy.

A New Way to Securely Expose Your Applications

Our goal in developing this new Secure Ingress Policy framework is to help streamline communication between application developers and platform operators. With this new feature, both personas can be productive while working together.

Here’s how it works: application developers provide a specification for their service, which they can store in their code management system and communicate to Aspen Mesh through an Application API. This spec includes the service ports and protocols to expose via ingress, along with the API paths and authentication requirements (e.g., JWT validation).

Platform operators provide a specification that defines the security and networking aspects of the platform and communicate it to Aspen Mesh via a Secure Ingress API. This spec includes certificate secrets, the domain name, and the JWKS server and issuer.
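To make the split between the two personas concrete, here is a hypothetical sketch of the two specs. These field names are illustrative only, not the actual Aspen Mesh schema:

```yaml
# Hypothetical developer-owned Application spec (illustrative fields only)
application:
  service: productpage
  port: 9080
  protocol: http
  paths:
  - path: /productpage
    jwt: required          # this path needs a valid JWT
---
# Hypothetical operator-owned Secure Ingress spec (illustrative fields only)
secureIngress:
  domain: bookinfo.example.com
  tlsSecret: bookinfo-cert
  jwks:
    issuer: example-issuer
    uri: https://example.com/.well-known/jwks.json
```

The point is the division of labor: developers describe what their service exposes, operators describe how the platform secures it, and neither needs the other’s details.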

Aspen Mesh Policy Framework - After


Aspen Mesh takes these inputs and creates all of the necessary system resources (Istio Gateways, VirtualServices and Authentication Policies), configures DNS entries and retrieves certificates to enable secure access to the application. If you have ever configured these by hand, you know the complexity involved in getting this right. With this new feature, we want our customers to focus on what’s important to them and let Aspen Mesh take care of their infrastructure needs. Additionally, the Aspen Mesh controllers always keep these resources in sync and augment them as the Secure Ingress and Application resources are updated.

Another important benefit of mapping APIs to personas is the ability to create ownership and store configuration in the right place. Keep the application-centric specifications in code, right next to the application, so that you can review and update both as part of your normal code review process. You don’t need another process or workflow to apply configuration changes out-of-band with your code changes. And because these things live together, they can naturally be deployed at the same time, reducing misconfigurations.

The overarching goal for our APIs is to enable platform operators to retain a strategic point of control for enforcing policies while allowing application developers to move quickly and deliver customer-facing features. And most importantly, to allow our customers’ customers to use the application reliably and securely.

Today, we’re natively integrated with AWS Route 53 and expect to offer integrations with Azure and GCP in the near future. We also retrieve and renew domain certificates from Let’s Encrypt; the only thing operators need to provide is their registered email address, and the rest is handled by the Aspen Mesh control plane.

Interested in learning more about Secure Ingress Policy? Reach out to one of our experts to learn more about how you can implement Aspen Mesh’s Secure Ingress Policies at your organization.

Secure Ingress + TLS Termination: The Match You Didn't Know You Needed

Keeping Data Secure

Working in an increasingly connected world, maintaining the security of users and their data is a challenging problem to solve. As service meshes become a better way to make services available to your users, it’s more important than ever to ensure that data is secured from the moment it leaves the user’s device until it is ingressed into your service mesh. In a world increasingly focused on the security of both users and their data, secure ingress is a fundamentally complex but necessary part of the service mesh architecture.

Today, I’d like to talk about secure ingress, TLS termination and how this all affects users and maintainers of an Istio service mesh. We’ll also touch on some of the pitfalls of setting up TLS termination as well as how Aspen Mesh attempts to simplify the entire process, so your teams are free to focus on providing value to your customers.


Security and You: What is Secure Ingress?

So you’ve got your services up and running, sidecars injected, Prometheus churning out stats and Istio working properly in your cluster; you’re ready to start processing user data and adding value. But your security team is leery about that TCP ingress you’ve been using for testing. Gateways and Virtual Services are great, and while passing TLS requirements down to the services has been working, they want a stronger assertion. How can you make sure that all data, whether or not it is secured within the service mesh, is encrypted, with no setup required from the services themselves?

Enter secure ingress. By configuring TLS requirements on your Istio Gateway, you can make sure that all information is encrypted, even without TLS on your services. Istio supports TLS certificates in both traditional file mount setups as well as in Kubernetes secrets.
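For example, a Gateway that terminates TLS using a certificate stored in a Kubernetes secret might look like the sketch below. The host name and secret name are placeholders, and the `credentialName` (secret-based) approach requires SDS to be enabled on the ingress gateway:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: secure-ingress
  namespace: default
spec:
  selector:
    istio: ingressgateway     # bind to the default ingress gateway pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE            # terminate TLS at the gateway
      credentialName: bookinfo-cert   # Kubernetes secret with cert and key
    hosts:
    - "bookinfo.example.com"  # placeholder domain
```

With `mode: SIMPLE`, the gateway terminates TLS on behalf of your services, so they never have to handle certificates themselves.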


Configuration Issues (Otherwise Known As: ANOTHER 503 UF UR?!)

You’ve decided that TLS termination is the way to go. Sounds amazing! No more worrying about services handling TLS requirements, and we can even JWT-secure our endpoints! You deploy the gateway, virtual service and authorization policy, and once you’ve got your TLS certs deployed, you hit that endpoint and get… a 503 status? Looking at the Istio ingress gateway logs only tells you that there was an upstream connection failure (UF) and the upstream connection was reset (UR). What’s going on?

Welcome to layer-seven TCP routing and mTLS requirements. By deploying an authorization policy to JWT-secure your ingress endpoints, you may have inadvertently disabled mTLS, causing your sidecars to balk at communicating. Maybe you haven’t even been fully using sidecars for some of the services up until this point! Each of these errors shows up as an opaque 503 status code and requires digging into the istioctl proxy config just to understand what’s going on in the backend.

Even more frustrating, the configuration of your service may have been fine without TLS termination. Ingressing through a TCP port to a TCP port on your service for HTTP traffic is fine. But now that you’re using HTTPS, Istio wants to know what type of traffic you’ll actually be sending. You’ll have to prefix the port names on your service with “http-” to tell Istio what you’re actually sending to that service. But Istio’s errors aren’t going to tell you that.
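For example, on Istio versions that rely on port-name protocol detection, the Service’s port names are how you declare the protocol; the names and numbers below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: productpage
  namespace: default
spec:
  selector:
    app: productpage
  ports:
  - name: http-web        # the "http-" prefix tells Istio this is HTTP traffic
    port: 9080
    targetPort: 9080
```

Leave that port named `tcp-web` (or just `web`) and Istio routes it as opaque TCP, which is exactly how you end up staring at a 503 with no obvious cause.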


Certificates? DNS?

Let’s say you’ve addressed the issues above. You’ve finally gotten your services to connect from the outside world, your cluster is up and running, and everything finally seems to be working as you expected. But hold on, your ELB just restarted, and now it has a new domain name and a new IP address. Suddenly, traffic is no longer ingressing. Your old DNS records aren’t pointing to the correct host name, and since you’re using TLS now, you can’t just point your customers at your new host name and call it good. 

External DNS management will need to be configured for your cluster to make sure this does not happen, updating the records as your DNS names and IP addresses shift. And when your certificates come up for renewal, will you be ready, with cert-manager set up properly in your cluster?


Secure Ingress with Aspen Mesh

As described above, setting up secure ingress into an Istio cluster is not as simple as it looks. From configuration issues like mTLS settings and service port naming to ever-changing environments with DNS and certificate renewals, managing TLS ingress into your cluster can be a daunting process, especially for new Istio users. The good news? Aspen Mesh can help simplify your secure ingress needs. By simply specifying the applications and ports you’d like to open to the world, and the ports you’d like to ingress on, Aspen Mesh’s Secure Ingress will take care of the configuration for you by:

  • Setting up and maintaining gateways, virtual services, and authorization policies
  • Providing you with detailed information about possible misconfigurations in your secure ingress

Now doesn’t that seem like a lot of work you’d rather have your system do for you? Reach out to our team of experts if you’d like to learn more about how we can help.

How A Service Mesh Can Make Application Delivery More Secure

What is the biggest business advantage that today’s companies have? Their level of agility. 

A business’s agility is what allows them to rapidly grow their revenue streams, respond to customer needs and defend against disruption. It is the need for agility that drives digital transformations and pushes companies to define new ways of working, develop new application architectures and embrace cloud and container technologies.

But agility alone won’t get a business where they need to be; agility with stability is the critical competitive advantage. Companies that can move faster and rapidly meet evolving customer needs — while staying out of the news for downtime and security breaches — will be the winners of tomorrow.

Service meshes help organizations achieve agility with stability by increasing the visibility and observability of their microservices, allowing them to gain control over a complex system and to enforce their applications’ security and compliance requirements. As companies continue to adopt cloud-native technologies, they must not lose sight of ensuring that the applications they deliver are secure and compliant; a service mesh provides many tools in its toolbox that allow them to do exactly that.

Let the Experts Be Experts

To ensure that applications are secure, organizations need security and compliance experts, and those experts need to be leveraged to create business-wide policies that protect customer and company data. All too often in the DevOps world, however, the implementation and application of those policies is left to application teams that are already busy implementing the individual microservices that make up the larger application. These individual teams do not have the expertise or context to understand the larger security needs of the business; worse, they may see security requirements as an impediment to delivering their code to production on schedule.

A service mesh can let experts be experts by allowing them to create security and authorization policies that are applied as a transparent layer under the application services, regardless of the application developers’ decisions. By creating this security layer, the burden of implementation becomes aligned with the people who have the most interest in its success, and the friction is removed from those who are least invested. This allows the business to be confident that its applications are as compliant, and its data as secure, as its risk profile requires.

Encryption and Identity for Zero Trust

Data needs to be protected at all times, not just while it is at rest in a database somewhere. That includes ensuring that data is encrypted while moving between microservices, regardless of whether that data ever hits a physical wire. Protecting that data means that you know:

  1. Who has access to the data
  2. That you trust them
  3. That they are sending and receiving the data securely

Because a service mesh is a transparent infrastructure layer that sits between the microservices and the network, it is the perfect place to ensure data encryption, identity, trust and permission.

By deploying a service mesh, organizations can ensure a secure by default posture in a zero-trust environment without changing existing applications or burdening application developers with complex authentication schemes, certificate management or permission revocation and additions. By delegating those functionalities to the mesh, organizations can easily deploy a more secure and compliant application environment with greater efficiency, less overhead and more confidence in their security posture.
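As a sketch of what “secure by default” delegation looks like under Istio 1.4-era APIs, mesh-wide mTLS is a pair of resources: a `MeshPolicy` requiring mTLS on the server side, and a `DestinationRule` telling client sidecars to originate it. Neither touches application code:

```yaml
# Require mTLS for all workloads in the mesh (server side)
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
---
# Tell client sidecars to originate mTLS to all in-mesh services (client side)
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: istio-system
spec:
  host: "*.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

The mesh then handles certificate issuance, rotation and identity via its built-in certificate authority; the applications simply speak plaintext to their local sidecars.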

Find and Fix with a Service Mesh

Mistakes will happen, and security policies will have holes in them. Organizations shouldn’t expect people, or the policies they create, to be perfect, but they must ensure that they find and fix those mistakes before others find and exploit them. Some of this can be done with tools and libraries that run inside the application’s code or container, or with firewalls and other products that run in the physical network. But these techniques miss one key element: what is going on as requests enter and leave the application while those requests are inside the cluster and its hosts.

A service mesh, especially an Istio-based sidecar mesh like Aspen Mesh, provides organizations with a unique view into every microservice’s request/response behavior. With this additional visibility, you can understand the behavior of a service’s traffic before and after it leaves the application’s code and container, forming a request trace from source to destination and back. Not only does this allow you to find anomalous requests and unknown traffic sources and destinations, it also allows you to stop them from accessing services they should not have access to through security and policy changes. Even more importantly, these policy changes can happen without directly impacting or changing the application, reducing the time it takes to close security holes while lessening the overall risk of exploits.

As organizations continue to embrace cloud and container technologies — and their use of those technologies matures and scales — a service mesh will become a vital part of their security and compliance strategy.

Learn More About Securing Containerized Applications

Interested in learning more about service mesh and security? Fill out the form below to get the white paper on how a service mesh can help you adopt a Zero-Trust security posture for your containerized applications.