Container orchestration

Going Beyond Container Orchestration

Every recent survey tells the same story about containers: organizations are not only adopting the technology, they are embracing it. Most, however, aren't relying on containers with the same degree of criticality as hyperscale organizations. They may be among the 85% of organizations that IDC, in a Cisco-sponsored survey of over 8,000 enterprises, found to be using containers in production. That sounds impressive, but the scale at which they use them is limited. In a Forrester report commissioned by Dell EMC, Intel, and Red Hat, 63% of enterprises using containers had more than 100 instances running, and 82% expected to be doing the same by 2019. That's a far cry from the hundreds of thousands of containers in use at hyperscale technology companies.

And though the adoption rate is high, that's not to say that organizations haven't dabbled with containers only to abandon the effort. As with any (newish) technology, challenges exist. At the top of the list for containers are suspects you know and love: networking and management.

Some of the networking challenges stem from the functionality available in popular container orchestration environments like Kubernetes. Kubernetes supports microservices architectures through its service construct, which allows developers and operators to abstract the functionality of a set of pods and expose it as "a service" accessed via a well-defined API. Kubernetes provides service naming and discovery along with rudimentary layer 4 (TCP-based) load balancing.
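
As a rough illustration (the service and label names here are hypothetical), a Kubernetes Service simply maps a stable name and port to whatever pods match a label selector, and the load balancing it performs happens at the connection level:

    apiVersion: v1
    kind: Service
    metadata:
      name: reviews              # hypothetical service name
    spec:
      selector:
        app: reviews             # pods carrying this label back the service
      ports:
      - name: http
        port: 80                 # stable port exposed to other services
        targetPort: 8080         # container port on the selected pods
        protocol: TCP            # balancing happens per TCP connection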

The problem with layer 4 (TCP-based) load balancing is its inability to interact with layer 7 (the application and API layers). This is true of any layer 4 load balancing; it's not unique to containers and Kubernetes. Layer 4 offers visibility into connection-level (TCP) protocols and metrics, but nothing more. That makes it difficult (impossible, really) to address higher-order concerns: layer 7 metrics like requests or transactions per second, or the ability to split traffic (route requests) based on path. It also means you can't do rate limiting at the API layer or support key capabilities like retries and circuit breaking.
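
For contrast, here is a sketch of the kind of layer 7 rule a service mesh can express, using Istio's routing API with hypothetical hosts and subsets: route requests by URL path and split the remaining traffic between two versions.

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews                  # hypothetical in-mesh service
      http:
      - match:
        - uri:
            prefix: /api/v2      # route based on request path
        route:
        - destination:
            host: reviews
            subset: v2           # subsets would be defined in a DestinationRule
      - route:                   # everything else: 90/10 traffic split
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10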

The lack of these capabilities drives developers to encode them into each microservice instead. That results in operational code being included with business logic. This should cause some amount of discomfort, as it clearly violates the principles of microservice design. It's also expensive as it adds both architectural and technical debt to microservices.

Then there's management. While Kubernetes is especially adept at handling build and deploy challenges for containerized applications, it lacks key functionality needed to monitor and control microservice-based apps at runtime. Basic liveness and health probes don't provide the granularity of metrics or the traceability that developers and operators need to quickly and efficiently diagnose issues during execution. And getting developers to instrument microservices to generate consistent metrics can be a significant challenge, especially when time constraints are putting pressure on them to deliver customer-driven features.
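
For reference, this is roughly what a basic Kubernetes liveness probe looks like (the image, path, and port are hypothetical); it can answer "is this container alive?" but says nothing about how the service is actually performing:

    apiVersion: v1
    kind: Pod
    metadata:
      name: reviews-v1                     # hypothetical pod
    spec:
      containers:
      - name: reviews
        image: example.com/reviews:1.0     # hypothetical image
        livenessProbe:
          httpGet:
            path: /healthz                 # returns 200 if the process is alive
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10                # a binary up/down signal, nothing more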

These are two of the challenges a service mesh directly addresses: management and networking.

How Service Mesh Answers the Challenge

Both are more easily addressed by implementing a service mesh as a set of sidecar proxies. By plugging directly into the container environment, sidecar proxies enable transparent networking capabilities and consistent instrumentation. Because all traffic is effectively routed through the sidecar proxy, it can automatically generate the metrics you need and feed them to the rest of the mesh. This is incredibly valuable for organizations deploying traditional applications in a container environment, because legacy applications are unlikely to be instrumented for a modern environment. A service mesh and its sidecar proxies enable those applications to emit the right metrics without requiring any code to be added or modified.
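
With Istio, for example, this typically comes down to labeling a namespace for automatic sidecar injection; workloads deployed there pick up an Envoy sidecar, and the metrics that come with it, without any code changes. A minimal sketch (the namespace name is hypothetical):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: legacy-apps              # hypothetical namespace of uninstrumented apps
      labels:
        istio-injection: enabled     # new pods here get an Envoy sidecar injected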

It also means that you don't have to spend your time reconciling different metrics being generated by a variety of runtime agents. You can rely on one source of truth - the service mesh - to generate a consistent set of metrics across all applications and microservices.

Those metrics can include higher-order data points that are fed into the mesh and enable more advanced networking to ensure the fastest available response to each request. Retries and circuit breaking are handled by the sidecar proxy in a service mesh, relieving developers of the burden of introducing operational code into their microservices. And because the sidecar proxy is not constrained to layer 4 (TCP), it can support advanced message routing techniques that rely on access to layer 7 (application and API).
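
As an illustration of what that looks like in practice, here is a sketch (again with hypothetical service names) of Istio configuration that declares retries and circuit breaking in the mesh rather than inside application code:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: ratings
    spec:
      hosts:
      - ratings
      http:
      - route:
        - destination:
            host: ratings
        retries:
          attempts: 3                # retried by the sidecar, not the caller
          perTryTimeout: 2s
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: ratings
    spec:
      host: ratings
      trafficPolicy:
        outlierDetection:            # circuit breaking: eject failing endpoints
          consecutiveErrors: 5
          interval: 30s
          baseEjectionTime: 30s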

Container orchestration is a good foundation, but enterprise organizations need more than just a good foundation. They need the ability to interact with services at the upper layers of the stack, where metrics and modern architectural patterns are implemented today.

Both are best served by a service mesh. When you need to go beyond container orchestration, go service mesh.


API Gateway vs Service Mesh

API Gateway vs Service Mesh

One of the recurring questions we get when talking to people about a service mesh is, "How is it different from an API gateway?" It's a good question. The overlap between API gateway and service mesh patterns is significant. They can both handle service discovery, request routing, authentication, rate limiting and monitoring. But there are differences in architectures and intentions. A service mesh's primary purpose is to manage internal service-to-service communication, while an API Gateway is primarily meant for external client-to-service communication.

Do You Need Both?

You may be wondering if you need both an API gateway and a service mesh. Today you probably do, but as service mesh technology evolves, we believe it will incorporate much of what you get from an API gateway today.

The main purpose of an API gateway is to accept traffic from outside your network and distribute it internally. The main purpose of a service mesh is to route and manage traffic within your network. A service mesh can work with an API gateway to efficiently accept external traffic then effectively route that traffic once it's in your network. The combination of these technologies can be a powerful way to ensure application uptime and resiliency, while ensuring your applications are easily consumable.

In a deployment with an API gateway and a service mesh, incoming traffic from outside the cluster would first be routed through the API gateway, then into the mesh. The API gateway could handle authentication, edge routing and other edge functions, while the service mesh provides fine-grained observability and control of your architecture.
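
In Istio terms, that split can be sketched roughly like this (the hostname and service names are hypothetical): a Gateway admits traffic at the edge, and a VirtualService bound to that gateway decides where requests go once they are inside the mesh.

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: public-gateway
    spec:
      selector:
        istio: ingressgateway        # use Istio's ingress gateway at the edge
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "api.example.com"          # hypothetical external hostname
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: api-routes
    spec:
      hosts:
      - "api.example.com"
      gateways:
      - public-gateway               # applies to traffic entering via the gateway
      http:
      - route:
        - destination:
            host: frontend           # in-mesh service handling external requests
            port:
              number: 8080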

The interesting thing to note is that service mesh technologies are quickly evolving and starting to take on some of the functions of an API gateway. A great example is the introduction of the Istio v1alpha3 routing API, which is available in Aspen Mesh 1.0. Before that, Istio relied on the Kubernetes Ingress resource, which is fairly basic, so it made sense to add an API gateway for richer functionality. The increased functionality introduced by the v1alpha3 API makes it easier to manage large applications and to work with protocols other than HTTP, something that previously required an API gateway to do effectively.
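
To make the non-HTTP point concrete, here is a small sketch (the service name and ports are hypothetical) of a v1alpha3 VirtualService routing raw TCP traffic, something the HTTP-centric Ingress model couldn't express:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: tcp-echo
    spec:
      hosts:
      - tcp-echo                     # hypothetical TCP service
      tcp:
      - match:
        - port: 31400                # match raw TCP traffic on this port
        route:
        - destination:
            host: tcp-echo
            port:
              number: 9000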

What The Future Holds

The v1alpha3 API provides a good example of how a service mesh is reducing the need for API gateway capabilities. As the cloud native space evolves and more organizations move to Docker and Kubernetes to manage their microservice architectures, it seems highly likely that service mesh and API gateway functionality will merge. In the next few years, we believe that standalone API gateways will be used less and less as much of their functionality is absorbed by the service mesh.


Enterprise Service Mesh

From Middleware to Containers: Infrastructure is Finally Cool

As someone fresh out of school just starting my software engineering career, I want to solve interesting problems. Who doesn’t? A computer science degree gave me the opportunity to see a spectrum of different engineering opportunities, which led me to decide that infrastructure would be the most impactful area to work in and, with the rise of cloud native technologies, a genuinely compelling one. There is a difference between developing new functionality and developing to solve existing problems. More often than not, the solutions that address existing challenges in an industry are the ones that are used the most and last the longest. This is what excites me about working on infrastructure: the ability to build something that millions of people will rely on to run their applications. On the surface it doesn’t appear to be the most exciting work, but you can be sure that your time and effort are being put to good use.

You want to see your contributions make an impact somehow, whether that’s writing webapps, iPhone applications, business tools, etc. - the things that people actually use day-to-day. Infrastructure may not be as visible or as tangible as these kinds of technologies, but it’s gratifying to know that it’s the underlying piece that makes it all work. As much as I want to be able to say that I contribute to something that all of my non-tech friends can easily understand (like the front-end of Netflix), I think it’s even more interesting to make them think about the things that happen behind the scenes. We all expect our favorite apps, websites, etc. to respond quickly to our requests no matter how many people are using them at the same time, but on the backend this is not something that is easy to handle and properly test for. What about security? We also expect that when we trust software with our information, it isn’t being easily intercepted or leaked along the way. Scalability and security are just two of the many kinds of problems that infrastructure software addresses, and in the end we rely on it to make the front-end software usable at all. The advantage these days is that infrastructure software has become an incredibly interesting space to be in. Tools like Docker, Kubernetes and Istio are fascinating technologies with vibrant communities around them.

One of the cool, heavily used Kubernetes-related projects that I’m a fan of is Envoy. I can’t help but think about how some version of Envoy is being used every time I order a Lyft to make sure I actually get a ride. Infrastructure doesn’t seem as intriguing at first because, as important as it is, it’s running in the background and easily forgotten. Everyone needs it, but in the end, who wants to build it? The answer to that question is definitely changing as the infrastructure landscape evolves. Kubernetes, the OS of the cloud, has become a project that everyone wants a hand in. You don’t hear about people itching to make contributions to the Linux kernel, but you hear about Kubernetes and containers everywhere.

Coming up with solutions to the problems we’re running into today has become especially attractive to junior developers. We’re watching more and more people use technology every day, and like I mentioned before, we want our contributions to be impactful. How are we going to handle all of this traffic in a smooth and scalable way? Enter: distributed systems. Microservices are critical to constructing applications that can handle huge transaction volumes at scale. Applications run by companies like Lyft, Twitter and Google would fall apart under even normal rates of traffic without their distributed architectures. Working on these infrastructural pieces is challenging, and it provides the impact that we, as junior developers, are looking for.

Another thing that makes this work enticing to junior developers is that it involves an open source community. The way that the tech community has decided to solve some of these bigger, infrastructure-related problems has largely been through open source, which is both intimidating and inviting to those who are new to the tech industry. There is an open group of people talking about the technology and a community willing to help, but at the same time it’s daunting to contribute to these bigger open source projects when you’re just starting out. I will say, however, that the benefits of being able to leverage so many technologies and the community support make it a lot of fun to be a part of.

To recap, here are some of my favorite things about working on infrastructure:

  • We can solve some really hard problems with good infrastructure!
  • If it’s done right, you can build something that can be easily customized to solve problems of various sizes and for all kinds of use cases.
  • All of the cool things and services we consume daily rely on it. Talk about actually seeing your hard work being put to good use!
  • Whether you’re doing proprietary work or not, you are being introduced to open source and the community that comes with it.

I’ll admit, developing infrastructure, despite all of the interesting bits, is still not the most glamorous work. It’s the underlying layer that most people take for granted in their everyday use of technology, and it’s often less shiny than the beautifully designed UIs and other components that sit on top of it. But once you dig in, it’s exciting to see what an impact you can make, and cloud-native technologies and communities make it a fun space to work in. What I will say, though, is that it’s a great way to start out your career in tech: a fun, challenging, and very rewarding place to be.