Tracing gRPC with Istio

At Aspen Mesh we love gRPC. Most of our public-facing and many internal APIs use it. To give you a brief background in case you haven't heard of it (which would be difficult given gRPC's belle-of-the-ball status), it is a new, highly efficient and optimized Remote Procedure Call (RPC) framework. It is based on the battle-tested protocol buffers serialization format and the HTTP/2 network protocol.

Using the HTTP/2 protocol, gRPC applications can benefit from multiplexed requests, efficient connection utilization and a host of other enhancements over protocols like HTTP/1.1, which is very well documented here. Additionally, protocol buffers are an easy and extensible way of serializing structured data in a binary format, which in itself gives you significant performance improvements over text-based formats. Combining the two results in a low-latency, highly scalable RPC framework, which is in essence what gRPC is. Additionally, the growing ecosystem gives you the ability to write your applications in many supported languages (C++, Java, Go, etc.) and an extensive set of third-party libraries to use.

Apart from the benefits I listed above, what I like most about gRPC is the simplicity and intuitiveness with which you can specify your RPCs (using the protobuf IDL) and how a client application can invoke methods on the server application as if it were a local function call. A lot of the code (service descriptions and handlers, client methods, etc.) gets auto-generated for you, making it very convenient to use.

Now that I have laid out some background, let’s turn our attention to the main topic of this blog. Here I’m going to cover how to add tracing in your applications built on gRPC, especially if you’re using Istio or Aspen Mesh.

Tracing is great for debugging and understanding your application’s behavior. The key to making sense of all the tracing data is being able to correlate spans from different microservices which are related to a single client request.

To achieve this, all microservices in your application should propagate tracing headers. If you’re using a service mesh like Istio or Aspen Mesh, the ingress and sidecar proxies automatically add the appropriate tracing headers and report the spans to the tracing collector backend like Jaeger or Zipkin. The only thing left for applications to do is propagate tracing headers from incoming requests (which sidecar or ingress proxy adds) to any outgoing requests it makes to other microservices.

Propagating Headers from gRPC to gRPC Requests

The easiest way to do tracing header propagation is to use the gRPC opentracing middleware library's client interceptors. You can use this approach if your application makes a new outbound request upon receiving the incoming request. Here's the sample code to correctly propagate tracing headers from the incoming to the outgoing request:

  import (
    "golang.org/x/net/context"

    "github.com/golang/glog"
    grpc_opentracing "github.com/grpc-ecosystem/go-grpc-middleware/tracing/opentracing"
    ot "github.com/opentracing/opentracing-go"
    "google.golang.org/grpc"
  )

  // ctx is the incoming gRPC request's context
  // addr is the address for the new outbound request
  func createGRPCConn(ctx context.Context, addr string) (*grpc.ClientConn, error) {
  	var opts []grpc.DialOption
  	opts = append(opts, grpc.WithStreamInterceptor(
  		grpc_opentracing.StreamClientInterceptor(
  			grpc_opentracing.WithTracer(ot.GlobalTracer()))))
  	opts = append(opts, grpc.WithUnaryInterceptor(
  		grpc_opentracing.UnaryClientInterceptor(
  			grpc_opentracing.WithTracer(ot.GlobalTracer()))))
  	conn, err := grpc.DialContext(ctx, addr, opts...)
  	if err != nil {
  		glog.Error("Failed to connect to application addr: ", err)
  		return nil, err
  	}
  	return conn, nil
  }

Pretty simple, right?

Adding the opentracing client interceptors ensures that any new unary or streaming gRPC request made on the client connection injects the correct tracing headers. If the passed context has tracing headers present (which should be the case if you are using Aspen Mesh or Istio and passing the incoming request's context), then the new span is created as a child of the span present in that context. On the other hand, if the context has no tracing information, a new root span is created for the outbound request.
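
For the incoming request's context to actually carry a span, the server side of your application needs to extract it from the incoming tracing headers. If you're using the same go-grpc-middleware library, a minimal sketch of wiring up the matching server-side interceptors could look like the following (the constructor function name here is mine, not from our code):

  // newTracedGRPCServer returns a gRPC server whose handler contexts carry the
  // span extracted from the incoming request's tracing headers.
  func newTracedGRPCServer() *grpc.Server {
    return grpc.NewServer(
      grpc.StreamInterceptor(
        grpc_opentracing.StreamServerInterceptor(
          grpc_opentracing.WithTracer(ot.GlobalTracer()))),
      grpc.UnaryInterceptor(
        grpc_opentracing.UnaryServerInterceptor(
          grpc_opentracing.WithTracer(ot.GlobalTracer()))),
    )
  }

With these in place, the ctx your handlers receive is the one you can hand straight to createGRPCConn above.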

Propagating Headers from gRPC to HTTP Requests

Now let’s look at the scenario if your application makes a new outbound HTTP/1.1 request upon receiving a new incoming gRPC request. Here’s the sample code to accomplish header propagating in this case:

  import (
    "net/http"

    "golang.org/x/net/context"
    "golang.org/x/net/context/ctxhttp"
    ot "github.com/opentracing/opentracing-go"
  )

  // ctx is the incoming gRPC request's context
  // addr is the address of the application being requested
  func makeNewRequest(ctx context.Context, addr string) {
    if span := ot.SpanFromContext(ctx); span != nil {
      req, _ := http.NewRequest("GET", addr, nil)

      ot.GlobalTracer().Inject(
        span.Context(),
        ot.HTTPHeaders,
        ot.HTTPHeadersCarrier(req.Header))

      resp, err := ctxhttp.Do(ctx, nil, req)
      // Do something with resp and err
    }
  }

This is the standard approach for serializing tracing headers from the incoming request's (HTTP or gRPC) context.

Great! So far we have been able to use libraries or standard utility code to get what we want.

Propagating Headers When Using gRPC-Gateway

One of the libraries commonly used in gRPC applications is the grpc-gateway library, which exposes services as RESTful JSON APIs. This is very useful when you want to consume gRPC from clients like curl or a web browser, which don't understand it, or when you want to maintain a RESTful architecture. More details on how to expose RESTful APIs using grpc-gateway can be found in this great blog. I highly encourage you to read it if you're new to this architecture.

When you start using grpc-gateway and want to propagate tracing headers, there are a few very interesting interactions worth mentioning. The grpc-gateway documentation states that all IANA permanent HTTP headers are prefixed with grpcgateway- and added as request headers. This is great, but since tracing headers like x-b3-traceid, x-b3-spanid, etc. are not IANA-recognized permanent HTTP headers, they are not copied over to gRPC requests when grpc-gateway proxies HTTP requests. This means that as soon as you add grpc-gateway to your application, the header propagation logic stops working.

Isn’t that typical? You add one awesome thing which breaks the current working setup. No worries, I have a solution for you!

Here’s a way to ensure you don’t lose the tracing information when proxying between HTTP and gRPC using grpc-gateway:

  import (
    "net/http"
    "golang.org/x/net/context"
    "google.golang.org/grpc/metadata"
    "github.com/grpc-ecosystem/grpc-gateway/runtime"
  )

  const (
  	prefixTracerState  = "x-b3-"
  	zipkinTraceID      = prefixTracerState + "traceid"
  	zipkinSpanID       = prefixTracerState + "spanid"
  	zipkinParentSpanID = prefixTracerState + "parentspanid"
  	zipkinSampled      = prefixTracerState + "sampled"
  	zipkinFlags        = prefixTracerState + "flags"
  )

  var otHeaders = []string{
  	zipkinTraceID,
  	zipkinSpanID,
  	zipkinParentSpanID,
  	zipkinSampled,
  	zipkinFlags}

  func injectHeadersIntoMetadata(ctx context.Context, req *http.Request) metadata.MD {
  	pairs := []string{}
  	for _, h := range otHeaders {
  		if v := req.Header.Get(h); len(v) > 0 {
  			pairs = append(pairs, h, v)
  		}
  	}
  	return metadata.Pairs(pairs...)
  }

  type annotator func(context.Context, *http.Request) metadata.MD

  func chainGrpcAnnotators(annotators ...annotator) annotator {
  	return func(c context.Context, r *http.Request) metadata.MD {
  		mds := []metadata.MD{}
  		for _, a := range annotators {
  			mds = append(mds, a(c, r))
  		}
  		return metadata.Join(mds...)
  	}
  }

  // Main function of your application. Insert tracing headers into gRPC
  // metadata using annotators
  func run() {
    ...
	  annotators := []annotator{injectHeadersIntoMetadata}

	  gwmux := runtime.NewServeMux(
		  runtime.WithMetadata(chainGrpcAnnotators(annotators...)),
	  )
    ...
  }

In the code above, I have used the runtime.WithMetadata API provided by the grpc-gateway library. This API is useful for reading attributes from the HTTP request and adding them to the metadata, which is exactly what we want! A little bit more work, but still using the APIs exposed by the library.

The injectHeadersIntoMetadata annotator looks for the tracing headers in the HTTP request and appends them to the metadata, thereby ensuring that the tracing headers can be further propagated from gRPC to outbound requests using the techniques mentioned in the previous sections.

Another interesting thing you might have observed is the chainGrpcAnnotators wrapper function. The runtime.WithMetadata API only allows a single annotator to be added, which might not be enough for all scenarios. In our case, we had a tracing annotator (like the one shown above) and an authentication annotator which appended auth data from the HTTP request to the gRPC metadata. Using chainGrpcAnnotators allows you to add multiple annotators, and the wrapper function joins the metadata from the various annotators into a single metadata object for the request.
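
To make the chaining concrete, here's a hedged sketch of what a second annotator could look like. The Authorization header handling below is hypothetical and not taken from our code; it just shows the shape of an annotator that chains cleanly with the tracing one:

  // injectAuthIntoMetadata is a hypothetical annotator that copies an
  // Authorization header from the HTTP request into the gRPC metadata so the
  // backend service can consume it.
  func injectAuthIntoMetadata(ctx context.Context, req *http.Request) metadata.MD {
    if v := req.Header.Get("Authorization"); len(v) > 0 {
      return metadata.Pairs("authorization", v)
    }
    return metadata.MD{}
  }

  // In run(), both annotators are chained into a single ServeMux option:
  //   annotators := []annotator{injectHeadersIntoMetadata, injectAuthIntoMetadata}
  //   gwmux := runtime.NewServeMux(
  //     runtime.WithMetadata(chainGrpcAnnotators(annotators...)),
  //   )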


The Road Ahead for Service Mesh

This is the third post in a blog series covering how we got to a service mesh, why we decided on the type of mesh we did, and where we see the future of the space.

If you’re struggling to manage microservices as architectures continue to become more complex, there’s a good chance you’ve at least heard of service mesh. For the purposes of this blog, I’ll assume you’re familiar with the basic tenets of a service mesh.

We believe that service mesh is advancing microservice communication to a new level that is unachievable with the one-off solutions that were previously being used. Things like DNS provide some capabilities like service discovery, but don’t provide fast retries, load balancing, tracing and health monitoring. The old approach also requires that you cobble together several things each time when it’s possible to bundle it all together in a reusable tool.

While it’s possible to accomplish much of what a service mesh manages with individual tools and processes, it’s manual and time consuming. The images below provides a good idea of how a mesh simplifies the management of microservices.


Right Around the Corner

So what’s in the immediate future? I think we’ll see the technology quickly mature and add more capabilities as standard features in response to enterprises realizing the efficiency gains created by a mesh and look to implement them as the standard for managing microservice architectures. Offerings like Istio are not ready for production deployments, but the roadmap is progressing quickly and it seems we’ll be to v1 in short order. Security is a feature provided by service mesh, but for most enterprises it’s a major consideration and I see policy enforcement and monitoring options becoming more robust for enterprise production deployments. A feature I see on the near horizon and one that will provide tremendous value is an analytics platform to show insights from the huge amount of telemetry data in a service mesh. I think an emerging value proposition we’ll see is that the mesh allows you to gain and act on data that will allow you to more efficiently manage your entire architecture.

Further Down the Road

There is a lot of discussion on what’s on the immediate horizon for service mesh, but what is more interesting is considering what the long term will bring. My guess is that we ultimately come to a mesh being an embedded value add in a platform. Microservices are clearly the way of the future, so organizations are going to demand an effortless way to manage them. They’ll want something automated, running in the background that never has to be thought about. This is probably years down the road, but I do believe service mesh will eventually be a ubiquitous technology that is a fully managed plug and play config. It will be interesting to see new ways of using the technology to manage infrastructure, services and applications.

We’re excited to be part of the journey, and are inspired by the ideas in the Istio community and how users are leveraging service mesh to solve direct problems created by the explosion of microservices and also find new efficiencies with it. Our goal is to make the implementation of a mesh seamless with your existing technology and provide enhanced features, knowledge and support to take the burden out of managing microservices. We’re looking forward to the road ahead and would love to work with you to make your microservices journey easier.


Service Mesh Architectures

If you are building your software and teams around microservices, you’re looking for ways to iterate faster and scale flexibly. A service mesh can help you do that while maintaining (or enhancing) visibility and control. In this blog, I’ll talk about what’s actually in a Service Mesh and what considerations you might want to make when choosing and deploying one.

So, what is a service mesh? How is it different from what’s already in your stack? A service mesh is a communication layer that rides on top of request/response unlocking some patterns essential for healthy microservices. A few of my favorites:

  • Zero-trust security that doesn’t assume a trusted perimeter
  • Tracing that shows you how and why every microservice talked to another microservice
  • Fault injection and tolerance that lets you experimentally verify the resilience of your application
  • Advanced routing that lets you do things like A/B testing, rapid versioning and deployment and request shadowing

Why a new term?

Looking at that list, you may think “I can do all of that without a Service Mesh”, and you’re correct. The same logic applies to sliding window protocols or request framing. But once there’s an emerging standard that does what you want, it’s more efficient to rely on that layer instead of implementing it yourself. Service Mesh is that emerging layer for microservices patterns.

Service mesh is still nascent enough that codified standards have yet to emerge, but there is enough experience that some best practices are beginning to become clear. As the bleeding-edge leaders develop their own approaches, it is often useful to compare notes and distill best practices. We've seen Kubernetes emerge as the standard way to run containers for production web applications. My favorite standards are emergent rather than forced: it's definitely a fine art to be neither too early nor too late to agree on common APIs, protocols and concepts.

Think about the history of computer networking. After the innovation of best-effort packet-switched networks, we found out that many of us were creating virtual circuits over them - using handshaking, retransmission and internetworking to turn a pile of packets into an ordered stream of bytes. For the sake of interoperability and simplicity, a “best practice” stream-over-packets emerged: TCP (the Introduction of RFC675 does a good job of explaining what it layers on top of). There are alternatives - I’ve used the Licklider Transmission Protocol in space networks where distributed congestion control is neither necessary nor efficient. Your browser might already be using QUIC. Standardizing on TCP, however, freed a generation of programmers from fiddling with implementations of sliding windows, retries, and congestion collapse (well, except for those packetheads that implemented it).

Next, we found a lot of request/response protocols running on top of TCP. Many of these eventually migrated to HTTP (or sequels like HTTP/2 or gRPC). If you can factor your communication into “method, metadata, body”, you should be looking at an HTTP-like protocol to manage framing, separate metadata from body, and address head-of-line blocking. This extends beyond just browser apps - databases like Mongo provide HTTP interfaces because the ubiquity of HTTP unlocks a huge amount of tooling and developer knowledge.

You can think about service mesh as being the lexicon, API and implementation around the next tier of communication patterns for microservices.

OK, so where does that layer live? You have a couple of choices:

  • In a Library that your microservices applications import and use.
  • In a Node Agent or daemon that services all of the containers on a particular node/machine.
  • In a Sidecar container that runs alongside your application container.

Library

The library approach is the original. It is simple and straightforward. In this case, each microservice application includes library code that implements service mesh features. Libraries like Hystrix and Ribbon would be examples of this approach.

This works well for apps that are exclusively written in one language by the teams that run them (so that it’s easy to insert the libraries). The library approach also doesn’t require much cooperation from the underlying infrastructure - the container runner (like Kubernetes) doesn’t need to be aware that you’re running a Hystrix-enhanced app.

There is some work on multilanguage libraries (reimplementations of the same concepts). The challenge here is the complexity and effort involved in replicating the same behavior over and over again.

We see very limited adoption of the library model in our user base because most of our users are running applications written in many different languages (polyglot), and are also running at least a few applications that aren’t written by them so injecting libraries isn’t feasible.

This model has an advantage in work accounting: the code performing work on behalf of the microservice is actually running in that microservice. The trust boundary is also small - you only have to trust calling a library in your own process, not necessarily a remote service somewhere out over the network. That code only has as many privileges as the one microservice it is performing work on behalf of. That work is also performed in the context of the microservice, so it’s easy to fairly allocate resources like CPU time or memory for that work - the OS probably does it for you.

Node Agent

The node agent model is the next alternative. In this architecture, there’s a separate agent (often a userspace process) running on every node, servicing a heterogenous mix of workloads. For purposes of our comparison, it’s the opposite of the library model: it doesn’t care about the language of your application but it serves many different microservice tenants.

Linkerd's recommended deployment in Kubernetes works like this, as do F5's Application Services Proxy (ASP) and Kubernetes' default kube-proxy.

Since you need one node agent on every node, this deployment requires some cooperation from the infrastructure - this model doesn’t work without a bit of coordination. By analogy, most applications can’t just choose their own TCP stack, guess an ephemeral port number, and send or receive TCP packets directly - they delegate that to the infrastructure (operating system).

Instead of good work accounting, this model emphasizes work resource sharing - if a node agent allocates some memory to buffer data for my microservice, it might turn around and use that buffer for data for your service in a few seconds. This can be very efficient, but there’s an avenue for abuse. If my microservice asks for all the buffer space, the node agent needs to make sure it gives your microservice a shot at buffer space first. You need a bit more code to manage this for each shared resource.

Another work resource that benefits from sharing is configuration information. It’s cheaper to distribute one copy of the configuration to each node, than to distribute one copy of the configuration to each pod on each node.

A lot of the functionality that containerized microservices rely on is provided by a node agent or something topologically equivalent. Think about kubelet initializing your pod, your favorite CNI daemon like flanneld, or, stretching your brain a bit, even the operating system kernel itself as following this node agent model.

Sidecar

Sidecar is the new kid on the block. This is the model used by Istio with Envoy. Conduit also uses a sidecar approach. In Sidecar deployments, you have one adjacent container deployed for every application container. For a service mesh, the sidecar handles all the network traffic in and out of the application container.

This approach is in between the library and node agent approaches for many of the tradeoffs I discussed so far. For instance, you can deploy a sidecar service mesh without having to run a new agent on every node (so you don't need infrastructure-wide cooperation to deploy that shared agent), but you'll be running multiple copies of an identical sidecar. Another take on this: I can install one service mesh for a group of microservices, and you could install a different one, and (with some implementation-specific caveats) we don't have to coordinate. This is powerful in the early days of service mesh, where you and I might share the same Kubernetes cluster but have different goals, require different feature sets, or have different tolerances for bleeding-edge vs. tried-and-true.

Sidecar is advantageous for work accounting, especially in some security-related aspects. Here’s an example: suppose I’m using a service mesh to provide zero-trust style security. I want the service mesh to verify both ends (client and server) of a connection cryptographically. Let’s first consider using a node agent: When my pod wants to be the client of another server pod, the node agent is going to authenticate on behalf of my pod. The node agent is also serving other pods, so it must be careful that another pod cannot trick it into authenticating on my pod’s behalf. If we think about the sidecar case, my pod’s sidecar does not serve other pods. We can follow the principle of least privilege and give it the bare minimum it needs for the one pod it is serving in terms of authentication keys, memory and network capabilities.

So, from the outside the sidecar has the same privileges as the app it is attached to. On the other hand, the sidecar needs to intervene between the app and the outside. This creates some security tension: you want the sidecar to have as little privilege as possible, but you need to give it enough privilege to control traffic to/from the app. For example, in Istio, the init container responsible for setting up the sidecar currently has the NET_ADMIN permission (to set up the necessary iptables rules). That initialization uses good security practices - it does the minimum amount necessary and then goes away, but everything with NET_ADMIN represents attack surface. (Good news - smart people are working on enhancing this further.)

Once the sidecar is attached to the app, it’s very proximate from a security perspective. Not as close as a function call in your process (like library) but usually closer than calling out to a multi-tenant node agent. When using Istio in Kubernetes your app container talks to the sidecar over a loopback interface inside of the network namespace shared with your pod - so other pods and node agents generally can’t see that communication.

Most Kubernetes clusters have more than one pod per node (and therefore more than one sidecar per node). If each sidecar needs to know “the entire config” (whatever that means for your context), then you’ll need more bandwidth to distribute that config (and more memory to store copies of it). So it can be powerful to limit the scope of configuration that you have to give to each sidecar - but again there’s an opposing tension: something (in Istio’s case, Pilot) has to spend more effort computing that reduced configuration for each sidecar.

Other things that happen to be replicated across sidecars accrue a similar bill. Good news - the container runtimes will reuse things like container disk images when they’re identical and you’re using the right drivers, so the disk penalty is not especially significant in many cases, and memory like code pages can also often be shared. But each sidecar’s process-specific memory will be unique to that sidecar so it’s important to keep this under control and avoid making your sidecar “heavy weight” by doing a bunch of replicated work in each sidecar.

Service meshes relying on sidecars provide a good balance between a full set of features and a lightweight footprint.

Will the node agent or sidecar model prevail?

I think you’re likely to see some of both. Now seems like a perfect time for sidecar service mesh: nascent technology, fast iteration and gradual adoption. As service mesh matures and the rate-of-change decreases, we’ll see more applications of the node agent model.

Advantages of the node agent model are particularly important as service mesh implementations mature and clusters get big:

  • Less overhead (especially memory) for things that could be shared across a node
  • Easier to scale distribution of configuration information
  • A well-built node agent can efficiently shift resources from serving one application to another

Sidecar is a novel way of providing services (like a high-level communication proxy a la Service Mesh) to applications. It is especially well-adapted for containers and Kubernetes. Some of its greatest advantages include:

  • Can be gradually added to an existing cluster without central coordination
  • Work performed for an app is accounted to that app
  • App-to-sidecar communication is easier to secure than app-to-agent

What’s next?

As Shawn talked about in his post, we’ve been thinking about how microservices change the requirements from network infrastructure for a few years now. The swell of support and uptake for Istio demonstrated to us that there’s a community ready to develop and coalesce on policy specs, with a well-architected implementation to go along with it.

Istio is advancing state-of-the-art microservices communication, and we’re excited to help make that technology easy to operate, reliable, and well-suited for your team’s workflow in private cloud, public cloud or hybrid.


The Path to Service Mesh

When we talk to people about service mesh, there are a few questions we’re always asked. These questions range from straightforward questions about the history of our project, to deep technical questions on why we made certain decisions for our product and architecture.

To answer those questions we’ll bring you a three-part blog series on our Aspen Mesh journey and why we chose to build on top of Istio.

To begin, I’ll focus on one of the questions I’m most commonly asked.

Why did you decide to focus on service mesh and what was the path that led you there?

LineRate Systems: High-Performance Software Only Load Balancing

The journey starts with a small Boulder startup called LineRate Systems and the acquisition of that company by F5 Networks in 2013. Besides having one of the smartest and most talented engineering teams I have ever had the privilege of being part of, LineRate built a lightweight, high-performing, software-only L7 proxy. When I say high performance, I am talking about turning a server you already had in your datacenter 5 years ago into a high-performance, 20+ Gbps, 200,000+ HTTP requests/second, fully featured proxy.

While the performance was eye-catching and certainly opened doors for our customers, our hypothesis was that customers wanted to pay for capacity, not hardware. That insight would turn out to be LineRate’s key value proposition. This simple concept would allow customers the ability to change the way that they consumed and deployed load balancers in front of their applications.

To fulfill that need we delivered a product and business model that allowed our customers to replicate the software as many times as needed across COTS hardware, allowing them to get peak performance regardless of how many instances they used. If a customer needed more capacity they simply upgraded their subscription tier and deployed more copies of the product until they reached the bandwidth, request rate or transaction rates the license allowed.

This was attractive, and we had some success there, but soon we had a new insight…

Efficiency Over Performance

It became apparent to us that application architectures were changing and the value curve for our customers was changing along with them. We noticed in conversations with leading-edge teams that they were talking about concepts like efficiency, agility, velocity, footprint and horizontal scale. We also started to hear from innovators in the space about this new technology called Docker, and how it was going to change the way that applications and services were delivered.

The more we talked to these teams and thought about how we were developing our own internal applications the more we realized that a shift was happening. Teams were fundamentally changing how they were delivering their applications, and the result was our customers were beginning to care less about raw performance and more about distributed proxies. There were many benefits to this shift including reducing the failure domains of applications, increased flexibility in deployments and the ability for applications to store their proxy and network configuration as code alongside their application.

At the same time containers and container orchestration systems were just starting to come on the scene, so we went to work on delivering our LineRate product in a container with a new control plane and thinking deeply about how people would be delivering applications using these new technologies in the future.

These early conversations in 2015 drove us to think about what application delivery would look like in the future…

That Idea that Just Won’t Go Away

As we thought more about the future of application delivery, we began to focus on the concept of policy and network services in a cloud-native distributed application world. Even though we had many different priorities and projects to work on, the idea of a changing application landscape, cloud-native applications and DevOps based delivery models remained in the forefront of our minds.

There just has to be a market for something new in this space.

We came up with multiple projects that for various reasons never came to fruition. We lovingly referred to them as v1.0, v1.5, and v2.0. Each of these projects had unique approaches to solving challenges in distributed application architectures (microservices).

So we thought as big as we could. A next-gen ADC architecture: a control plane that’s totally API-driven and separate from the data plane. The data plane comes in any form you can think of: purpose-built hardware, software-on-COTS, or cloud-native components that live right near a microservice (like a service mesh). This infinitely scalable architecture smooths out all tradeoffs and works perfectly for any organization of any size doing any kind of work. Pretty ambitious, huh? We had fallen into the trap of being all things to all users.

Next, we refined our approach in “1.5”, and we decided to define a policy language… The key was defining that open-source policy interface and connecting that seamlessly to the datapath pieces that get the work done. In a truly open platform, some of those datapath pieces are open source too. There were a lot of moving parts that didn’t all fall into place at once; and in hindsight we should have seen some of them coming … The market wasn’t there yet, we didn’t have expertise in open source, and we had trouble describing what we were doing and why.

But the idea just kept burning in the back of our minds, and we didn’t give up…

For Version 2.0, we devised a plan that could help F5’s users who were getting started on their container journey. The technology was new and the market was just starting to mature, but we decided that customers would take three steps on their microservice journey:

  1. Experimenting - Testing applications in containers on a laptop, server or cloud instance.
  2. Production Planning - Identifying what technology is needed to start to enable developers to deploy container-based applications in production.
  3. Operating at Scale - Focus on increasing the observability, operability and security of container applications to reduce the mean-time-to-discovery (MTTD) and mean-time-to-resolution (MTTR) of outages.

We decided there was nothing we could do for experimenting customers, but for production planning, we could create an open source connector for container orchestration environments and BIG-IP. We called this the BIG-IP Container Connector, and we were able to solve existing F5 customers’ problems and start talking to them about the next step in their journey. The container connector team continues to this day to bridge the gap between ADC-as-you-know-it and fast-changing container orchestration environments.

We also started to work on a new lightweight containerized proxy called the Application Services Proxy, or ASP. Like Linkerd and Envoy, it was designed to help microservices talk to each other efficiently, flexibly and observably. Unlike Linkerd and Envoy, it didn’t have any open source community associated with it. We thought about our open source strategy and what it meant for the ASP.

At the same time, a change was taking place within F5…

Aspen Mesh - An F5 Innovation

As we worked on our go to market plans for ASP, F5 changed how it invests in new technologies and nascent markets through incubation programs. These two events, combined with the explosive growth in the container space, led us to the decision to commit to building a product on top of an existing open source service mesh. We picked Istio because of its attractive declarative policy language, scalable control-plane architecture and other things that we’ll cover in more depth as we go.

With a plan in place it was time to pitch our idea for the incubator to the powers that be. Aspen Mesh is the result of that pitch and the end of one journey, and the first step on a new one…

Parts two and three of this series will focus on why we decided to use Istio for our service mesh core and what you can expect to see over the coming months as we build the most fully supported enterprise service mesh on the market.


Top 3 Reasons to Manage Microservices with Service Mesh


Building microservices is easy, operating a microservice architecture is hard. Many companies are successfully using tools like Kubernetes for deploys, but they still face runtime challenges. This is where the service mesh comes in. It greatly simplifies the managing of containerized applications and makes it easier to monitor and secure microservice-based applications. So what are the top 3 reasons to use a supported service mesh? Here’s my take.

Security

Since a service mesh operates at the data plane, it's possible to apply a common security policy across the mesh, which provides much greater security than piecing it together across the multiple layers of an environment like Kubernetes. A service mesh secures inter-service communications so you can know what a service is talking to and whether that communication can be trusted.

Observability

Most failures in the microservices space occur during the interactions between services, so a view into those transactions helps teams better manage architectures to avoid failures. A service mesh provides a view into what is happening when your services interact with each other. The mesh also greatly improves tracing capabilities and provides the ability to add tracing without touching all of your applications.

Simplicity

A service mesh is not a new technology; rather, it is a bundling together of several existing technologies in a package that makes managing the infrastructure layer much simpler. There are existing solutions that cover some of what a mesh does - take DNS, for example. It's a good way to do service discovery when you don't care about the source trying to discover the service. If all you need from service discovery is to find the service and connect to it, DNS is sufficient, but it doesn't give you fast retries or health monitoring. When you want to ask more advanced questions, you need a service mesh. You can cobble things together to address much of what a service mesh addresses, but why would you want to when you could just interact with a service mesh that provides a one-time, reusable packaging?

There are certainly many more advantages to managing microservices with a service mesh, but I think the above 3 are major selling points where organizations that are looking to scale their microservice architecture would find the greatest benefit. No doubt there will also be expanded capabilities in the future such as analytics dashboards that provide easy to consume insights from the huge amount of data in a service mesh. I’d love to hear other ideas you might have on top reasons to use service mesh, hit me up @zjory.


Distributed tracing with Istio in AWS

Everybody loves tracing! Am I right? If you attended KubeCon (my bad, CloudNativeCon!) 2017 in Austin or saw any of the keynotes posted online, you would have noticed the recurring theme explaining the benefits of tracing, especially as it relates to DevOps tools. Istio and service mesh were hot topics, and many sessions discussed how Istio provides distributed tracing out of the box, making it easier for application developers to integrate tracing into their system.

Indeed, a great benefit of using a service mesh is getting more visibility into and understanding of your applications. Since this is a tech post (I remember categorizing it as such), let's dig deeper into how Istio provides application tracing.

When using Istio, a sidecar envoy proxy is automatically injected next to your applications (in Kubernetes this means adding containers to the application Pod). This sidecar proxy intercepts all traffic and can add/augment tracing headers to the requests entering/leaving the application container. Additionally, the sidecar proxy also handles asynchronous reporting of spans to the tracing backends like Jaeger, Zipkin, etc. Sounds pretty awesome!

One thing that applications do need to implement is propagating tracing headers from incoming to outgoing requests, as mentioned in this Istio guide. Simple enough, right? Well, it's about to get interesting.
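
To make that concrete, for a plain HTTP handler the propagation boils down to copying the headers listed in the Istio guide from the incoming request to any outgoing one. Here's a minimal sketch (the function and variable names are mine); we'll see shortly why it isn't quite this simple for AWS requests:

import "net/http"

// These are the headers the Istio guide asks applications to forward so the
// sidecar can stitch the spans of a single request together.
var tracingHeaders = []string{
  "x-request-id",
  "x-b3-traceid",
  "x-b3-spanid",
  "x-b3-parentspanid",
  "x-b3-sampled",
  "x-b3-flags",
  "x-ot-span-context",
}

// copyTracingHeaders copies the tracing headers from an incoming HTTP request
// onto an outbound one.
func copyTracingHeaders(in *http.Request, out *http.Request) {
  for _, h := range tracingHeaders {
    if v := in.Header.Get(h); v != "" {
      out.Header.Set(h, v)
    }
  }
}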

Before we proceed further, first a little background on why I’m writing this blog. We, here at Aspen Mesh offer a supported enterprise service mesh built on open source Istio. Not only do we offer a service mesh product but we also use it in our production SaaS platform hosted in AWS (isn’t that something?).

I was tasked with propagating tracing headers in our applications so that we get nice hierarchical traces graphing the relationship between our microservices. As we are hosted in AWS, many of our microservices make outgoing requests to AWS services. During this exercise, I found some interesting interactions between adding tracing headers and using Istio with AWS services, so I decided to share my experience. This blog describes the various iterations I went through to get it all working together.

The application in question for this post is a simple web server. When it receives an HTTP request, it makes an outbound DynamoDB query to fetch an item. As it is deployed in the Istio service mesh, the sidecar proxy automatically adds tracing headers to the incoming request. I wanted to propagate the tracing headers from the incoming request to the DynamoDB query request so that all the traces are tied together.

First Iteration

In order to achieve this, I decided to pass a custom function as a request option to the AWS DynamoDB API, which allows you to augment request headers before they are transmitted over the wire. In the snippet below I'm using the AWS go-sdk's dynamo.GetItemWithContext for fetching an item and passing AddTracingHeaders as the request.Option. Note that the AddTracingHeaders method uses the standard opentracing API for injecting headers from an input context.

func AddTracingHeaders() awsrequest.Option {
  return func(req *awsrequest.Request) {
    if span := ot.SpanFromContext(req.Context()); span != nil {
      ot.GlobalTracer().Inject(
      span.Context(),
      ot.HTTPHeaders,
      ot.HTTPHeadersCarrier(req.HTTPRequest.Header))
    }
  }
}

// ctx is the incoming request's context as received from the mesh
func makeDynamoQuery(ctx context.Context ) {
  // Note that AddTracingHeaders is passed as awsrequest.Option
  result, err := dynamo.GetItemWithContext(ctx, ..., AddTracingHeaders())
  // Do something with result
}

Ok, time to test this solution! The new version compiles, and I verified locally that it is able to fetch items from DynamoDB. After deploying the new version in production with Istio (sidecar injected), I was hoping to see the traces nicely tied together. Indeed, the traces look much better, but wait: all of the responses from DynamoDB are now HTTP status code 400. Bummer!

Looking at the error messages from aws-go-sdk, we are getting AccessDeniedException, which according to the AWS docs indicates that the signature is not valid. Adding tracing headers seems to have broken signature validation, which is odd yet interesting: I had tested in my dev environment (without the sidecar proxy) and the DynamoDB requests worked fine, but in production they stopped working. A typical developer nightmare!

Digging into the AWS SDK package, I found that the client code signs every request, including headers, with a few hardcoded exceptions. The difference between the earlier and the new version is the addition of tracing headers to the request, which now get signed and then handed to the sidecar proxy. Istio's sidecar proxy (in this case Envoy) changes these tracing headers (as it should!) before sending the request to the DynamoDB service, which breaks signature validation at the server.

To get this fixed we need to ensure that the tracing headers are added after the request is signed but before it is sent out by the AWS sdk. This is getting more complicated, but still doable.

Second Iteration

I couldn't find an easy way to whitelist these tracing headers and prevent them from getting signed. But the AWS session package provides a very flexible API for adding custom handlers which get invoked at various stages of the request lifecycle. Additionally, a session handler has the benefit of being applied to all AWS service requests (not just DynamoDB) which use that session. Perfect!

Here’s the AddTracingHeaders method above added as a session handler:

sess, err := session.NewSession(cfg)

// Add the AddTracingHeaders as the first Send handler. This is important as one
// of the default Send handlers does the work of sending the request.
sess.Handlers.Send.PushFront(AddTracingHeaders)

This looks promising. Testing showed that the first request to the AWS DynamoDB service is successful (200 OK!), and the traces look good too. We are getting somewhere; time to test some failure scenarios.

I added an Istio fault injection rule to return an HTTP 500 error on outgoing DynamoDB requests to exercise the AWS SDK's retry logic. Snap! We're back to receiving HTTP status code 400 with an AccessDeniedException error on every retry.

Looking at the AWS request send logic, it appears that on retriable errors the code makes a copy of the previous request, signs it and then invokes the Send handlers. This means that on retries the previously added tracing headers get signed again (i.e. the earlier problem is back, hence the 400s) and then the AddTracingHeaders handler adds the tracing headers back on top.

Now that we understand the issue, the solution we came up with is to add the tracing headers after the request is signed and before it is sent out, just like the earlier implementation. In addition, to make retries work, we now need to remove these headers after the request is sent so that the re-signing and re-invocation of AddTracingHeaders are handled correctly.

Final Iteration

Here’s what the final working version looks like:

func injectFromContextIntoHeader(ctx context.Context, header http.Header) {
  if span := ot.SpanFromContext(ctx); span != nil {
    ot.GlobalTracer().Inject(
    span.Context(),
    ot.HTTPHeaders,
    ot.HTTPHeadersCarrier(header))
  }
}

func AddTracingHeaders() awsrequest.Option {
  return func(req *awsrequest.Request) {
    injectFromContextIntoHeader(req.Context(), req.HTTPRequest.Header)
  }
}

// This is a bit odd, inject tracing headers into an empty header map so that
// we can remove them from the request.
func RemoveTracingHeaders(req *awsrequest.Request) {
  header := http.Header{}
  injectFromContextIntoHeader(req.Context(), header)
  for k := range header {
    req.HTTPRequest.Header.Del(k)
  }
}

sess, err := session.NewSession(cfg)

// Add the AddTracingHeaders as the first Send handler.
sess.Handlers.Send.PushFront(AddTracingHeaders)

// Pushback is used here so that this handler is added after the request has
// been sent.
sess.Handlers.Send.PushBack(RemoveTracingHeaders)

Agreed, the above solution looks far from elegant, but it does work. I hope this post helps if you are in a similar situation.

If you have a better solution feel free to reach out to me at neeraj@aspenmesh.io


Building Istio with Minikube-in-a-Container and Jenkins

Aspen Mesh provides a supported distribution of Istio, which means that we need to be able to test and release bugfixes even if they are out-of-cadence with the upstream Istio project. To do this we've developed our own build and test infrastructure. Now that we've got many of these pieces up and running, we figured some parts might be useful if you are also interested in CI for Istio but not committed to Circle CI or GKE.

This post will show how we made an updated Minikube-in-a-Container and a Jenkins pipeline that uses it to build and test Istio. If you want, you can docker run the minikube container right now and get a functioning Kubernetes cluster inside the container that you can throw away when you’re done. The Jenkins bits will help you build Istio today and also give you a head-start if you want to build containers inside of containers.

Minikube-in-a-Container

This part describes how we made a Minikube-in-a-container that we use to run the Istio smoke tests during a build. This isn’t our idea - we started with localkube-dind. We couldn’t get it working out-of-the-box, we think due to a little bit of drift between localkube and minikube, so this is a record of what we changed to get it working for us. We also added some options and tooling so that we can use Istio in the resulting container. Nothing too fancy but we’re hoping it gives you a head start if you’re heading down a similar path.

Minikube may be familiar to you as a project to start up your own Kubernetes cluster in a VM that you can carry around on your laptop. This approach is very convenient but there are some situations where you can’t/don’t want to provision a VM, like cloud providers that don’t offer nested virtualization. Since docker can now run inside of docker, we decided to try making our own Kubernetes cluster inside of a docker container. An ephemeral Kubernetes container is easy to start, run a few tests, and throw away when you’re done and is a good fit for CI.

In our model, the Kubernetes cluster creates child docker containers (not sibling containers, in the lingo of Jérôme Petazzoni's consideration). We did this intentionally - we preferred the isolation of child containers over sharing the docker build cache. But you should check out Jérôme's article before committing to DinD for your application - maybe DooD (Docker-outside-of-Docker) is better for you. FYI - we've avoided the "it gets worse" part, and it looks like the "bad" and "ugly" parts are fixed/avoidable for us.

When you start a docker container, you’re asking docker to create and setup a few namespaces in the kernel, and then start your container inside these namespaces. A namespace is a sandbox - when you’re inside the namespace, you can generally only see other things that are also inside the namespace. A chroot, but for more than just filesystems - PIDs, network interfaces, etc. If you start a docker container with --privileged then the namespaces that are created get extra privileges, like the ability to create more child namespaces. That’s the trick at the core of docker-in-docker. For any more details, again, Jérôme’s the expert - check out his explanation (complete with Xzibit memes) here.

OK, so here’s the flow:

  1. Build a container that’s got docker, minikube, kubectl and dependencies installed.
  2. Add a “fake-systemctl” shim to trick Minikube into running without a real systemd installation.
  3. Start the container with --privileged
  4. Have the container start its own “inner” dockerd - this is the DinD part.
  5. Have the container start minikube --vm-driver=none so that minikube (in the container) talks to the dockerd running right alongside it.

All you have to do is docker run --privileged this container and you’re ready to go with kubectl. If you want, you can run the kubectl inside the container and get a truly throw-away environment.

You can try it now:

docker run --privileged --rm -it quay.io/aspenmesh/minikube-dind
docker exec -it <container> /bin/bash
# kubectl get nodes
<....>
# kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/shell-demo.yaml
# kubectl exec -it shell-demo -- /bin/bash

When you exit, the --rm flag means that docker will tear down and throw away everything for you.

For heavier usage, you’ll probably want to “docker cp” the kubeconfig file to your host and talk to kubernetes inside the container over the exposed kube API port 8443.

Here’s the Dockerfile that makes it go (you can clone this and support scripts here):

# Portions Copyright 2016 The Kubernetes Authors All rights reserved.
# Portions Copyright 2018 AspenMesh
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Based on:
# https://github.com/kubernetes/minikube/tree/master/deploy/docker/localkube-dind
FROM debian:jessie
# Install minikube dependencies
RUN DEBIAN_FRONTEND=noninteractive apt-get update -y && \
DEBIAN_FRONTEND=noninteractive apt-get -yy -q --no-install-recommends install \
iptables \
ebtables \
ethtool \
ca-certificates \
conntrack \
socat \
git \
nfs-common \
glusterfs-client \
cifs-utils \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common \
bridge-utils \
ipcalc \
aufs-tools \
sudo \
&& DEBIAN_FRONTEND=noninteractive apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install docker
RUN \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
apt-key export "9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88" | gpg - && \
echo "deb [arch=amd64] https://download.docker.com/linux/debian jessie stable" >> \
/etc/apt/sources.list.d/docker.list && \
DEBIAN_FRONTEND=noninteractive apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get -yy -q --no-install-recommends install \
docker-ce \
&& DEBIAN_FRONTEND=noninteractive apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
VOLUME /var/lib/docker
EXPOSE 2375
# Install minikube
RUN curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.24.1/minikube-linux-amd64 && chmod +x minikube
ENV MINIKUBE_WANTUPDATENOTIFICATION=false
ENV MINIKUBE_WANTREPORTERRORPROMPT=false
ENV CHANGE_MINIKUBE_NONE_USER=true
# minikube --vm-driver=none checks systemctl before starting. Instead of
# setting up a real systemd environment, install this shim to tell minikube
# what it wants to know: localkube isn't started yet.
COPY fake-systemctl.sh /usr/local/bin/systemctl
EXPOSE 8443
# Install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.1/bin/linux/amd64/kubectl && \
chmod a+x kubectl && \
mv kubectl /usr/local/bin
# Copy local start.sh
COPY start.sh /start.sh
RUN chmod a+x /start.sh
# If nothing else specified, start up docker and kubernetes.
CMD /start.sh & sleep 4 && tail -F /var/log/docker.log /var/log/dind.log /var/log/minikube-start.log

Jenkins for Istio

Now that we’ve got Kubernetes-in-a-container we can use this for our Istio builds. Dockerized build systems are nice because developers can quickly create higher fidelity replicas of the CI build. Here’s an outline of our CI architecture for Istio builds:

  • Jenkins worker: This is a VM started by Jenkins for running builds. It may be shared by other builds at the same time. It’s important that any tooling we install on the worker is locally-scoped (so it doesn’t interfere with other builds) and ephemeral (we autoscale Jenkins workers to save costs).
  • Minikube container: The first thing we do is build and enter the Minikube container we talked about above. The rest of the build proceeds inside this container (or its children). The Jenkins workspace is mounted here. Jenkins’ docker plugin takes care of tearing this container down in success or failure, which is all we need to clean up all the running Kubernetes and Istio components.
  • Builder container: This is a container with build tools like the golang toolchain installed. It’s where we compile Istio and build containers for the Istio components. We test those components in the minikube container, and if they pass, declare the build a success and push the containers to our registry.

Most of the Jenkinsfile is about getting those pieces set up. After that, we run the same steps to build Istio that you would on your laptop: make depend, make build, make test.

Check out the Jenkinsfile here:

node('docker') {
properties([disableConcurrentBuilds()])
wkdir = "src/istio.io/istio"
stage('Checkout') {
checkout scm
}
// withRegistry writes to /home/ubuntu/.dockercfg outside of the container
// (even if you run it inside the docker plugin) which won't be visible
// inside the builder container, so copy them somewhere that will be
// visible. We will symlink to .dockercfg only when needed to reduce
// the chance of accidentally using the credentials outside of push
docker.withRegistry('https://quay.io', 'name-of-your-credentials-in-jenkins') {
stage('Load Push Credentials') {
sh "cp ~/.dockercfg ${pwd()}/.dockercfg-quay-creds"
}
}
k8sImage = docker.build(
"k8s-${env.BUILD_TAG}",
"-f $wkdir/.jenkins/Dockerfile.minikube " +
"$wkdir/.jenkins/"
)
k8sImage.withRun('--privileged') { k8s ->
stage('Get kubeconfig') {
sh "docker exec ${k8s.id} /bin/bash -c \"while ! [ -e /kubeconfig ]; do echo waiting for kubeconfig; sleep 3; done\""
sh "rm -f ${pwd()}/kubeconfig && docker cp ${k8s.id}:/kubeconfig ${pwd()}/kubeconfig"
// Replace "127.0.0.1" with the path that peer containers can use to
// get to minikube.
// minikube will bake certs including the subject "kubernetes" so
// the kube-api server needs to be reachable from the client's concept
// of "https://kubernetes:8443" or kubectl will refuse to connect.
sh "sed -i'' -e 's;server: https://127.0.0.1:8443;server: https://kubernetes:8443;' kubeconfig"
}
builder = docker.build(
"istio-builder-${env.BUILD_TAG}",
"-f $wkdir/.jenkins/Dockerfile.jenkins-build " +
"--build-arg UID=`id -u` --build-arg GID=`id -g` " +
"$wkdir/.jenkins",
)
builder.inside(
"-e GOPATH=${pwd()} " +
"-e HOME=${pwd()} " +
"-e PATH=${pwd()}/bin:\$PATH " +
"-e KUBECONFIG=${pwd()}/kubeconfig " +
"-e DOCKER_HOST=\"tcp://kubernetes:2375\" " +
"--link ${k8s.id}:kubernetes"
) {
stage('Check') {
sh "ls -al"
// If there are old credentials from a previous build, destroy them -
// we will only load them when needed in the push stage
sh "rm -f ~/.dockercfg"
sh "cd $wkdir && go get -u github.com/golang/lint/golint"
sh "cd $wkdir && make check"
}
stage('Build') {
sh "cd $wkdir && make depend"
sh "cd $wkdir && make build"
}
stage('Test') {
sh "cp kubeconfig $wkdir/pilot/platform/kube/config"
sh """PROXYVERSION=\$(grep envoy-debug $wkdir/pilot/docker/Dockerfile.proxy_debug |cut -d: -f2) &&
PROXY=debug-\$PROXYVERSION &&
curl -Lo - https://storage.googleapis.com/istio-build/proxy/envoy-\$PROXY.tar.gz | tar xz &&
mv usr/local/bin/envoy ${pwd()}/bin/envoy &&
rm -r usr/"""
sh "cd $wkdir && make test"
}
stage('Push') {
sh "cd && ln -sf .dockercfg-quay-creds .dockercfg"
sh "cd $wkdir && " +
"make HUB=yourhub TAG=$BUILD_TAG push"
gitTag = getTag(wkdir)
if (gitTag) {
sh "cd $wkdir && " +
"make HUB=yourhub TAG=$gitTag push"
}
sh "cd && rm .dockercfg"
}
}
}
}
String getTag(String wkdir) {
return sh(
script: "cd $wkdir && " +
"git describe --exact-match --tags \$GIT_COMMIT || true",
returnStdout: true
).trim()
}

If you want to grab the files from this post and the supporting scripts, go here.


Aspen Mesh Enterprise Service Mesh

Introducing Aspen Mesh - The Enterprise Service Mesh

Today we are very excited to introduce Aspen Mesh, an enterprise service mesh built on the open source project Istio. After talking to development and operations teams, it became clear that microservices are great for development velocity, but the complexity and risk in these architectures lie in the service-to-service communication that microservices depend on. We have taken an application-first approach to provide a communication fabric for microservices, called a service mesh. Our supported service mesh gives DevOps teams the flexibility and autonomy they desire while providing the policy, visibility and insights into their microservice environment that operations teams demand for production-grade applications.

What is Aspen Mesh?

It’s a service mesh.

I know what you are thinking… “So what?”

We’ll have plenty more to say about that in the future, but for now think about all the network, security and telemetry services you use for your traditional monolithic applications.

Now think about your plans for microservices. Maybe you plan to have 10, 50, 100 or 1000's of services running in your Kubernetes cluster. How do you get all those services in your new microservice and container environments in an efficient, uniform way?

Do you know who is talking to who and if they are allowed to? Is that communication secure? How do you debug something when it goes down? How do you add tracing or logging without touching all your applications? Do you know what the performance or quality impacts of releasing a new version of one of those services is on the upstream and downstream services?

A service mesh helps answer those questions. As a transparent infrastructure layer that is inserted between your microservice and the network a service mesh gives you a single point in the communication path of your applications to insert services and gather telemetry. You can do this without requiring changes to your applications.

What are Aspen Mesh’s Benefits Over Open Source?

We think open source is great! In fact, we think some projects are so awesome that we decided to use them in our product. Aspen Mesh is built on an open core model and our Enterprise Service Mesh is a packaged and supported distribution of Istio and Envoy.

Because having a choice is important, we have taken a unique approach to our product that allows you the most flexibility in how you deploy a service mesh in your environment. Aspen Mesh consists of our hosted SaaS platform for visibility, analytics and policy management and our supported Enterprise Service Mesh distribution.

Aspen Mesh’s Enterprise Service Mesh Distribution can be deployed by customers who require product support and services for their production systems. We version, build, package, test and document our distribution and we fully support our customers throughout their microservices journey. Using our distribution of Istio gives you access to our feature set in both the service mesh as well as our hosted portal, and it is fully supported.

Our Hosted SaaS Platform can be used with the community version of Istio. So if you are passionate about using open source, just exploring the concepts of containers and service mesh, or have already deployed Istio, using the portal alone is an option. As an open source user you get visibility, predictive analytics and policy management as well as a hosted option for logging and tracing infrastructure. Our enterprise customers have access to features and functionality that can only be provided when using our enterprise distribution.

How Do I Get Started with Aspen Mesh?

The concept of service mesh is brand new. In fact, until 2018 was declared “The Year of the Service Mesh” at KubeCon in December, most people had never heard of a service mesh. But, we have been working on this concept in different ways for a while now and are able to offer early access to the product for interested customers.

We are looking for teams on their container journey who are looking to solve real problems with their applications. We need partners who are excited to work with us and understand the value of a strong relationship.

Not everyone is cut out for the next big thing, but if you think you are up to the challenge we would love to talk to you and your team.

Join our early access program today.