
The Complete Guide to Service Mesh

Service meshes are new, extremely powerful and can be complex. If you’ve been asking questions like “What is a service mesh?” “Why would I use one?” “What benefits can it provide?” or “How did people even come up with the idea for service mesh?” then The Complete Guide to Service Mesh is for you.

Check out the free guide to learn more.



Service Mesh University

Catch up on all things service mesh with these seven on-demand videos from the experts, organized into bite-size sections so you can learn at your own pace.



Top 9 Takeaways from IstioCon 2021

At the beginning of last year, we predicted the top three developments around service mesh in 2020 would be:

  1. A quickly growing need for service mesh
  2. Istio will be hard to beat
  3. Core service mesh use cases will emerge that will be used as models for the next wave of adopters

And we were right about all three, as evidenced by what we learned at IstioCon.

As a new community-led event, IstioCon 2021 provided the first organized opportunity for Istio’s community members to gather on a large, worldwide scale to present, learn, and discuss the many features and benefits of the Istio service mesh. And this event was a resounding success.

With over 4,000 attendees — in its first year, and as a virtual event — IstioCon exceeded attendance expectations many times over. The event showcased lessons learned from running Istio in production and first-hand experiences from the Istio community, and featured maintainers from across the Istio ecosystem, including Lin Sun, John Howard, Christian Posta, Neeraj Poddar, and more. With sessions presented across five days in English, as well as keynotes and sessions in Chinese, this was indeed a worldwide effort. It is well known that the Istio community reaches far and wide, but it was fantastic to see that so many people interested in, considering, and even using Istio in production at scale were ready to show up and share.

But apart from the outstanding response of the Istio community, we were particularly excited to dig into what people are really using this service mesh for and how they’re interacting with it. So, we’ve pulled together the curated list below of top Istio trends, hot topics, and the top three sessions you don’t want to miss.

Top 3 Istio Service Mesh Trends to Watch

After watching each session (so you don’t have to!), we’ve distilled the top three service mesh and Istio takeaways from IstioCon that you should keep on your radar.

1. Istio is production-ready. No longer just a shiny new object, Istio has transformed over the past few years from a novel infrastructure technology into a microservices management technology that real companies are using in production, at scale, right now. We saw insightful user story presentations from T-Mobile, Airbnb, eBay, Salesforce, FICO, and more.

2. Istio is more versatile than you thought. Did you know that Istio is being used right now by users and companies to manage everything from user-facing applications like Airbnb to behind-the-scenes infrastructure like 5G networks?

3. Istio and Kubernetes have a lot in common. There are many similarities between Istio and Kubernetes in terms of how these technologies have developed and how they are being adopted. It’s well known that Kubernetes is “the de facto standard for cloud native applications,” and Istio is being called “the most popular service mesh” in the CNCF annual user survey. But more than this, the two are growing closer together in terms of the technologies themselves. We look forward to the growth of both.

Top 3 Hot Topics

In addition to higher level industry trends, there were many other hot topics that surfaced as part of this conference. From security to Wasm, multicluster, integrations, policies, ORAS, and more, there is a lot going on in the service mesh marketplace that many folks may not have realized. Here are the three hot topics we’d like you to know about:

1. Multicluster. You can configure a single mesh to include multiple clusters. A multicluster deployment within a single mesh affords capabilities beyond those of a single-cluster deployment, including fault isolation and failover, location-aware routing, various control plane models, and team or project isolation. It was indeed a hot topic at IstioCon, with an entire workshop devoted to Istio multicluster, plus two additional individual sessions and a dedicated office-hours session.
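To make the multicluster model concrete, here is a minimal sketch of the installation values used to join a cluster to a shared mesh under Istio's multi-primary model; the mesh, cluster and network names are illustrative placeholders, and the exact fields can vary by Istio version:

```yaml
# IstioOperator values for enrolling a cluster in a multicluster mesh
# (multi-primary model; all names below are illustrative placeholders).
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1              # the one mesh spanning all clusters
      multiCluster:
        clusterName: cluster1    # unique per cluster
      network: network1          # drives cross-network routing decisions
```

A second cluster would apply the same configuration with its own clusterName (and its own network value if it sits on a separate network), after which the control planes can share endpoint discovery across clusters.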

2. Wasm. WebAssembly (Wasm) is a sandboxing technology that can be used to extend the Istio proxy (Envoy). The Proxy-Wasm sandbox API replaces Mixer as the primary extension mechanism in Istio. Over the past year, Wasm has moved further to the forefront, garnering two sessions plus its own office-hours session at IstioCon.
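As a rough illustration of how a Wasm extension attaches to the mesh, newer Istio releases (later than the IstioCon 2021 timeframe) expose a WasmPlugin resource; the plugin name and registry URL below are hypothetical:

```yaml
# Attach a Wasm module to the ingress gateway's Envoy proxies.
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: example-header-filter    # hypothetical plugin name
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway      # apply only to the ingress gateway
  url: oci://registry.example.com/filters/header-filter:v1  # hypothetical image
  phase: AUTHN                   # run before Istio's authentication filters
```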

3. Security. Let’s face it, we’re all concerned about security, and with good reason. Istio has decided to face security challenges head on, and while not exactly a new topic, it’s one worth reiterating. The Istio Product Security Working Group had a session, plus we saw two more sessions featuring security as a headliner, and a dedicated office-hours session. 

Side note: our hot-topic list had a tie with one more contender: debugging Istio. If you get a chance, check out the three recorded sessions on debugging as well.

Top 3 Sessions You Will Want to Watch On-demand

Not everyone has time to watch a conference for five days in a row. And that’s ok. There are about 77 sessions we wish you could watch, but we’ve also identified the top three we think you’ll get the most out of. Check these out:

1. Using Istio to Build the Next Generation 5G Platform. As the most-watched session at this event, we have to start here. In this session, Aspen Mesh’s Co-founder and Chief Architect Neeraj Poddar and David Lenrow, Senior Principal Cloud Security Architect at Verizon, covered what 5G is and why it matters, architecture options with Istio, platform requirements, security, and more.

2. User story from Salesforce - The Salesforce Service Mesh: Our Istio Journey. In this session, Salesforce Software Architect Pratima Nambiar talked us through their background around why they needed a service mesh, their initial implementation, Istio’s value, progressive adoption of Istio, and features they are watching and expect to adopt. 

3. User story from eBay - Istio at Scale: How eBay is Building a Massive Multitenant Service Mesh Using Istio. In this session, Sudheendra Murthy covered eBay’s story, from their applications deployment to service mesh journey, scale testing, and future direction.

What’s Next for Istio?

We were excited to be part of this year’s IstioCon, and it was wonderful to see the Istio community come together for this new event. As our team members have been key contributors to the Istio project over the past few years, we’ve had a front-row seat to the growth of both the project and its community.

To learn more about what the Istio project has coming up on the horizon, check out this project roadmap session. We’re looking forward to the continued growth of this open source technology, so that more companies — and people — can benefit from what it has to offer.




The 451 Take on cloud-native: truly transformative for enterprise IT

March 15, 2019

By Jay Lyman, Fernando Montenegro, Matt Aslett, Owen Rogers, Melanie Posey, Brian Partridge, William Fellows, Simon Robinson, Mike Fratto, Liam Rogers  

Helping to shape the modern software development and IT operations paradigms, cloud-native represents a significant shift in enterprise IT. In this report, we define cloud-native and offer some perspective on why it matters and what it means for the industry. 

In this report, 451 Research presents our definition of cloud-native and the key technologies and methodologies that are representative of the trend, including containers, Kubernetes, service mesh and serverless. We recognize the importance of cloud-native based on our survey research and conversations with enterprise providers and end users. Containers and serverless are among the top IaaS features in use and planned for use, according to our Voice of the Enterprise: Digital Pulse, Budgets & Outlook, 2019 survey.  

Cloud-native technologies and methodologies – a departure from monolithic applications and waterfall release processes – are being driven by a desire for speed, efficiency, and support for applications and services that are distributed across hybrid infrastructure such as public clouds, private clouds and on-premises environments. There are, nevertheless, significant challenges with cloud-native approaches, mainly around complexity and lack of available skills and experience. Indeed, access to talent is becoming a key constraint for enterprises transforming around cloud and cloud-native (see Figure 1 below). 

We expect the cloud-native trend to continue to grow, fueled in part by intersections with adjacent technologies and trends, including data and analytics, AI and ML, security, and IoT/edge computing – all of which play a role in facilitating digital transformation. We also expect the cloud-native market, populated by a burgeoning number of startups, as well as established giants, to undergo consolidation as vendors seek to gain talent and the market matures. 

 

The 451 Take

Just like DevOps, cloud-native technologies and methodologies are now being attached to digital transformation efforts, and are expanding their presence in enterprise IT. Right now, the cloud-native trend consists mainly of containers, microservices, Kubernetes, service mesh and serverless, but we may see intersections of these different approaches with adjacent trends, as well as new ones. In addition to application development and deployment in the public cloud, cloud-native is connected to private and hybrid clouds and the ability to run applications consistently across different IT environments. Kubernetes, for example, is not only a container management and orchestration software, it is also a distributed application framework – one that is timed well with enterprise use of hybrid environments that span multiple clouds, as well as on-premises infrastructure. 

Cloud-native software is also closely intertwined with open source. Nearly all of the key software components are open source projects, and we believe open source to be table stakes for cloud-native software. There is still ample commercial opportunity around cloud-native. We would also highlight that cloud-native is not limited to public cloud platforms, with on-premises environments increasingly serving as the basis for cloud-native approaches. We also see cloud-native crossing over with adjacent trends. Such intersections and integrations bode well for continued growth and significance of cloud-native approaches. It remains to be seen which approach within the cloud-native arena will be most effective and which combination of different technologies paves the best path forward for enterprise and service-provider organizations, but we will continue to track the technology, use cases and impact of cloud-native going forward, including survey data, market sizing and other research.

Figure 1: Cloud Skills Gaps – Roadblock to Optimized Cloud Leverage 

Source: 451 Research, Voice of the Enterprise: Cloud, Hosting & Managed Services, Organizational Dynamics 2018 


Cloud-native defined

451 Research defines cloud-native software as: applications designed from the ground up to take advantage of cloud computing architectures and automated environments, and to leverage API-driven provisioning, auto-scaling and other operational functions. Cloud-native architecture and software include applications that have been redesigned to take advantage of cloud computing architectures, but are not limited to cloud applications – we see cloud-native technologies and practices present in on-premises environments in the enterprise. We can also define cloud-native by the technologies and approaches that characterize the trend, all intended to make software development and deployment more fluid and composable – containers, microservices, Kubernetes, service mesh and serverless. 

Our research and conversations indicate that these different types of cloud-native application development and deployment are by no means exclusive in enterprise organizations, which are typically leveraging multiple cloud-native technologies and methodologies across their many different releases and teams. Rather than competing components, tools and methods, the different technologies of cloud-native software are similar to hybrid cloud, which is representative of a best-tool-for-the-job or Best Execution Venue (BEV) approach. We also contend that cloud-native is far broader than application development and deployment. Cloud-native also includes application and infrastructure architecture and organizational approach. 

From an economic point of view, cloud-native technologies enable the true value of cloud by allowing applications to scale and evolve in much shorter timelines than previously. This scalability creates new opportunities for the business in terms of revenue growth, efficiency improvements or a better customer experience. However, cloud entropy means that scalability leads to greater complexity, which is where the likes of Kubernetes, Istio, Prometheus and others come into play. The raison d’être for these open source components is to keep track of the fluid and complex deployments of cloud-native services.

In terms of applications, we see cloud-native methodologies and technologies used for a breadth of both internal and consumer-facing applications, led by data services and analytics applications, IT optimization and automation, digital user enhancement, and industry-specific software. 

The Spectrum of Abstraction

Contrary to the narrative that ‘serverless is killing containers,’ we don’t see the different approaches and technologies within cloud-native competing with or eliminating one another. Containers living alongside, and sometimes inside of, VMs is indicative of how all of the different aspects of cloud-native will coexist in a mixed-use market. No, serverless is not killing containers; serverless is built on containers. The main distinction between the two is the level of abstraction provided to the end user. Thus, we can also describe cloud-native as a set of technologies that fall somewhere on what we call the Spectrum of Abstraction.

Figure 2: The Spectrum of Abstraction 

Source: 451 Research, LLC 

On one side of this spectrum is the DIY containers approach, whereby organizations leverage custom code and services and make their own choices on languages, frameworks and APIs. This approach is attractive for certain applications that require low latency, that run longer compute jobs, and for which high traffic can be predicted. On the other end of the spectrum, as functionality becomes more abstracted and invisible, are serverless functions and events, for which there are standardized and opinionated choices that are abstracted away from the end user. In between these two ends of the spectrum are still other levels of abstraction, such as supported Kubernetes distributions and container-as-a-service offerings from the large public cloud vendors and others. 

We typically see these different cloud-native technologies adopted in a specific order, starting with containers used for microservices, which break applications into smaller, loosely coupled services; then Kubernetes container orchestration and distributed application management for container clusters; followed by service mesh to abstract for developers and serverless to abstract for IT operators. However, we do see mixed use of the different approaches, and a leap whereby interested customers skip ahead is feasible. For example, overheard at KubeCon + CloudNativeCon 2018 was the idea that organizations might be able to skip containers, microservices and the complexity of Kubernetes by simply adopting serverless. The reality is not that simple for most enterprise and service-provider organizations, which are more likely to be using the different technologies concurrently.

There is some interesting tension between different approaches that are still playing out in the marketplace – for example, advocates of ‘single platform’ approaches to cloud-native, such as OpenStack, Pivotal/Cloud Foundry or Red Hat, versus loosely coupled models that will be composed of different coordinated parts. Both require a specific organizational model, and the success or otherwise of each has yet to be determined – enterprises are still undergoing transformation. 

Cloud-native isn't only in the cloud

Cloud-native does not necessarily mean applications run only on private or public cloud infrastructure. The hybrid cloud trend, which entails the use of a mix of public and private clouds with on-premises environments, dictates that enterprises will seek to run cloud-native applications atop on-premises infrastructure, as well. Vendors have responded aggressively with offerings such as Azure Stack, GKE On-Prem and AWS Outposts. PaaS vendors, such as Red Hat with OpenShift or Pivotal with PCF, have also focused on the ability to run applications consistently across public clouds and on-premises infrastructure. In fact, our recent Voice of the Enterprise: Servers and Converged Infrastructure, Vendor Evaluations 2018 survey indicates continued growth of x86 servers and on-premises environments, with nearly one-third of organizations anticipating an increase in their x86 server deployments in the coming year.

Further evidence of the ties between cloud-native and hybrid cloud can be found in our Voice of the Enterprise: Cloud, Hosting and Managed Services, Workloads and Key Projects 2018 survey, which indicates that the largest share of cloud-native software (32%) is designed to run effectively on any cloud environment, with another 22% designed to run effectively on any public cloud environment, rather than for a specific public cloud (30%) or private cloud (17%).

Cloud-native with adjacent trends/sectors

Data, AI and ML 

The dynamism that cloud-native architecture and containers provide is ideal for stateless web applications, but it can be problematic for stateful database workloads, given the need for a persistent connection between the application and its associated data volume. Kubernetes, in particular, has been at the forefront of containerization of stateful services, providing elements for persistence and cluster lifecycle management that enable custom deployments for individual databases that could be the beginning of a viable long-term approach. Database vendors are beginning to update their products to take advantage of these features. However, inherent challenges remain in getting databases and containers to work together, and vendors, enterprises and industry consortia must work together to continue to evolve Kubernetes, in order to provide a general-purpose environment for the containerization of multiple stateful services. 

Cloud-native methodology and software are also crossing over with artificial intelligence and machine learning, including integrations of TensorFlow, an open source machine learning library, and projects such as Kubeflow for machine learning on Kubernetes. The combination enables data scientists to create and train models in self-contained environments with the necessary data and dependencies; these can then be deployed into production via Kubernetes, which provides autoscaling, failover, and infrastructure monitoring and management, as well as execution venue abstraction.

Security 

Increased adoption of cloud-native technology and delivery patterns will deeply influence how organizations think about security, even as key security principles, such as the need to maintain confidentiality, integrity and availability, remain. The scope of changes will affect both security technology and practices. On the technology front, the key cloud-native technologies (containers, Kubernetes, service mesh and others) have incorporated some security functionality themselves – service mesh supports workload identity and encryption, while Kubernetes includes several policy constructs. This will affect organizations deploying these technologies, as well as vendor offerings, since that functionality becomes the reference point for additional functionality and design decisions. Particularly as organizations adopt high-level services and abstractions (containers as a service and serverless), the focus of security shifts much more to application-level security and data security. This is a shift away from traditional infrastructure security considerations. Lastly, the quickened pace associated with cloud-native deployments will deeply affect security teams – not only will they need to skill up in cloud-native technologies and patterns, but the very pace of deployment will require teams to rethink how they interact with the rest of IT, and what role security can actually play. 

IoT and edge computing 

While the timing of their arrival on the IT scene was coincidental, it’s as though containerization and IoT were born to be together: a match between capability (containers) and need (IoT app developers). The trends are well-aligned as the IoT industry matures, scales, and requires a complicated tapestry of computing venues depending on context and use case.

We believe the successful future of IoT is linked with timely adoption of cloud-native techniques to support the speed and diversity of IoT apps. The reality that a nontrivial portion of IoT apps will actually fail means reducing the cost of failure is a high priority, as is the need for iterative updates to software based on feedback from ‘the field.’ There is also a requirement for a small operating system footprint for low-power edge devices; support for microservices to enable the data- and messaging-intensive characteristics of IoT across and within multiple actors; and platform-independent runtime support using container technologies and orchestration to ensure that workloads are run on the optimal computing platform at the edge, near edge or centralized core.

Networking 

There are still significant challenges to cloud-native networking, whether in a cloud service, an on-premises or colocation cloud environment, a virtual machine-based cloud, a container-based cloud, or a mix of services and on-premises. Enterprise IT prefers consistency in capabilities, but cloud-native environments have basic networking capabilities that established networking vendors have been attempting to address by integrating their switch and management software with the container environment and the container management framework. These products unify networking workflows and are familiar to IT, but can also inhibit IT from moving past its traditionally managed infrastructure, which is rigid and slow to adapt to changes. Layer a service mesh on top – a more robust technology for cloud-native infrastructure that provides a useful abstraction between application connectivity and the physical or virtual paths interconnecting software and hardware – and much of the intelligent networking capability in the physical underlay becomes irrelevant in the application layer.

There are opportunities for application delivery controller (ADC) vendors that can deeply embed themselves into enterprise IT by offering to offload a number of critical capabilities from application owners, such as intelligent load-balancing, high availability and security functions, to purpose-built platforms that can augment applications and keep developers focused on building features versus infrastructure. ADC vendors are also finding ways to embed their products into application infrastructure by enabling scale-out architecture via robust APIs and by replacing container environment components such as the ingress controller in front of a container pod.

Storage 

There is a shift in how storage is being run as both startups and established vendors offer more storage capabilities (ranging from the storage controller to the backup application) in containers. The alternative is to have them run in VMs, as one would find in HCI-style deployments, or on a dedicated operating system like in proprietary appliances. This brings new flexibility to storage management since the various capabilities of storage platforms can be orchestrated and automated using the same tooling as the applications they are supporting. 

Another consideration in the storage industry is providing containerized applications with storage as vendors evolve their offerings to take into account Docker volume drivers and Kubernetes Container Storage Interface drivers to support flexible storage consumption for containerized, stateful applications. This will be increasingly important as containers are used for stateful applications, whether they are net new or traditional and legacy apps that are being containerized for use in the cloud. 
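As a simple sketch of that consumption model from the application side, a stateful workload requests storage through a PersistentVolumeClaim bound to a StorageClass that is backed by a CSI driver; the driver name, class name and sizes here are placeholders:

```yaml
# A StorageClass backed by a (hypothetical) CSI driver...
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: csi.example.com        # hypothetical CSI driver name
---
# ...and a claim a containerized, stateful app uses to request a volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
```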

Heavily open source

Considering the most successful software components of cloud-native, open source software is a critical part of the trend. Nearly all cloud-native software components are open source, including Docker containers, Kubernetes management and orchestration, Helm package management, Prometheus monitoring, Istio service mesh, and Knative serverless. It is also noteworthy in the context of cloud-native that modern open source software projects and communities include not only vendors, but also end users, which are among project supporters and sponsors in the cloud-native market. The open source nature of cloud-native also means that traditional rivals, such as Microsoft and Google or Pivotal and Red Hat, work together on many of these open source projects in the cloud-native ecosystem. Cloud-native is also all about collaboration, meaning it must accommodate DevOps by offering something for developers and IT operators, as well as other stakeholders, including security teams, data analytics and data science teams, and line-of-business leaders. 

Cloud-native competition and outlook

The industry is moving toward containers, microservices, Kubernetes, serverless and other cloud-native constructs. While there are other flavors available, Kubernetes has the wind in its sails and has all but won the battle for container orchestration. Many cloud-native entrants have a ‘Kubernetes first’ posture in terms of platform architecture and service delivery. Incumbent vendors, service providers and integrators are rewriting and retooling for cloud-native. Cloud-native is a part of every conversation with customers. Most enterprises are already working at some level with cloud-native constructs and exploring what new outcomes can be achieved. Every company is becoming a service provider – seeking to better engage with customers, partners, and suppliers with new digital services and experiences, and to compete in the digital economy. Companies will need to raise their software IQ, and cloud-native will be the basis of this, supported on the cloud operating and delivery model. Cloud-native practices such as CI/CD enable companies to access speed and agility not previously available, and will require new organizational approaches to development. 

With many vendors across the different subsegments (containers, Kubernetes, service mesh and serverless), we expect further consolidation of the market. The need for cloud-native talent and expertise – our VotE survey data indicates cloud functions/tools such as containers and microservices are among the most acute skills shortages – will also likely drive mergers and acquisitions in the space. However, it may take some time since different enterprise and service-provider customers have very different needs, and thus support a broad array of providers in the market. The cloud-native market is highly competitive, with no dominant player yet established, although the hyperscale public cloud providers and large vendors that embraced containers early on are the clear leaders. 

We also expect that, driven largely by digital transformation and the need to embrace and leverage new technology, cloud-native approaches will more deeply permeate large enterprise organizations. Similar to the DevOps trend, this means increasingly pulling in additional stakeholders, including administrators and line-of-business leaders. This means cloud-native technology and methodology will probably follow the pattern of agile and DevOps to reach half or more of organizations within the next few years. It is also important to note that the concept of cloud-native was meant to mean more than containers, Kubernetes or serverless, leaving room for the next technology, which may be a combination of existing ones; integration with adjacent trends, such as DevSecOps, data analytics, AI and ML; or something currently unknown. 

 




Doubling Down On Istio

Good startups believe deeply that something is true about the future, and organize around it.

When we founded Aspen Mesh as a startup inside of F5, my co-founders and I believed these things about the future:

  1. App developers would accelerate their pace of innovation by modularizing and building APIs between modules packaged in containers.
  2. Kubernetes APIs would become the lingua franca for describing app and infrastructure deployments and Kubernetes would be the best platform for those APIs.
  3. The most important requirement for accelerating is to preserve control without hindering modularity, and that’s best accomplished as close to the app as possible.

We built Aspen Mesh to address item 3. If you boil down reams of pitch decks, board-of-directors updates, marketing and design docs dating back to summer of 2017, that's it. That's what we believe, and I still think we're right.

Aspen Mesh is a service mesh company, and the lowest levels of our product are the open-source service mesh Istio. Istio has plenty of fans and detractors; there are plenty of legitimate gripes and more than a fair share of uncertainty and doubt (as is the case with most emerging technologies). With that in mind, I want to share why we selected Istio and Envoy for Aspen Mesh, and why we believe more strongly than ever that they're the best foundation to build on.

 

Why a service mesh at all?

A service mesh is about connecting microservices. The acceleration we're talking about relies on applications that are built out of small units (predominantly containers) that can be developed and owned by a single team. Stitching these units into an overall application requires APIs between them. APIs are the contract. A service mesh measures and assists contract compliance.

There's more to it than reading the 12-factor app. All these microservices have to communicate effectively to actually solve a user's problem. Communication over HTTP APIs is well supported in every language and environment, so it has never been easier to get started. However, don't let the simplicity delude you: you are now building a distributed system.

We don't believe the right approach is to demand deep networking and infrastructure expertise from everyone who wants to write a line of code.  You trade away the acceleration enabled by containers for an endless stream of low-level networking challenges (as much as we love that stuff, our users do not). Instead, you should preserve control by packaging all that expertise into a technology that lives as close to the application as possible. For Kubernetes-based applications, this is a common communication enhancement layer called a service mesh.

How close can you get? Today, we see users having the most success with Istio's sidecar container model. We forecasted that in 2017, but we believe the concept ("common enhancement near the app") will outlive the technical details.

This common layer should observe all the communication the app is making; it should secure that communication and it should handle the burdens of discovery, routing, version translation and general interoperability. The service mesh simplifies and creates uniformity: there's one metric for "HTTP 200 OK rate", and it's measured, normalized and stored the same way for every app. Your app teams don't have to write that code over and over again, and they don't have to become experts in retry storms or circuit breakers. Your app teams are unburdened of infrastructure concerns so they can focus on the business problem that needs solving.  This is true whether they write their apps in Ruby, Python, node.js, Go, Java or anything else.

That's what a service mesh is: a communication enhancement layer that lives as close to your microservice as possible, providing a common approach to controlling communication over APIs.

 

Why Istio?

Just because you need a service mesh to secure and connect your microservices doesn't mean Envoy and Istio are the only choice.  There are many options in the market when it comes to service mesh, and the market still seems to be expanding rather than contracting. Even with all the choices out there, we still think Istio and Envoy are the best choice.  Here's why.

We launched Aspen Mesh after learning some lessons with a precursor product. We took what we learned, re-evaluated some of our assumptions and reconsidered the biggest problems development teams using containers were facing. It was clear that users didn't have a handle on managing the traffic between microservices. We also saw that few were using microservices in earnest yet, so we realized this problem would only get more urgent as microservices adoption increased.

So, in 2017, we asked: what would characterize the technology that solved that problem?

We compared our own nascent work with other purpose-built meshes like Linkerd (in the 1.0 Scala-based implementation days) and Istio, and non-mesh proxies like NGINX and HAProxy. This was long before service mesh options like Consul, Maesh, Kuma and OSM existed. Here's what we thought was important:

  • Kubernetes First: Kubernetes is the best place to position a service mesh close to your microservice. The architecture should support VMs, but it should serve Kubernetes first.
  • Sidecar "bookend" Proxy First: To truly offload responsibility to the mesh, you need a datapath element as close as possible to the client and server.
  • Kubernetes-style APIs are Key: Configuration APIs are a key cost for users. Human engineering time is expensive. Organizations are judicious about what APIs they ask their teams to learn. We believe Kubernetes API design and mechanics got it right. If your mesh is deployed in Kubernetes, your API needs to look and feel like Kubernetes (see the sketch after this list).
  • Open Source Fundamentals: Customers will want to know that they are putting sustainable and durable technology at the core of their architecture. They don't want a technical dead-end. A vibrant open source community ensures this via public roadmaps, collaboration, public security audits and source code transparency.
  • Latency and Efficiency: These are performance keys that are more important than total throughput for modern applications.
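To make the Kubernetes-style point concrete (as referenced in the list above), here is a minimal sketch of an Istio routing rule; note that it reads like any other Kubernetes resource, with apiVersion, kind, metadata and spec. The reviews service and its subset are placeholders:

```yaml
# An Istio routing rule that looks and feels like a Kubernetes object.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                 # the in-mesh service this rule governs
  http:
  - route:
    - destination:
        host: reviews
        subset: v2          # send all traffic to the v2 subset
```

Anyone who has written a Deployment or Service manifest can read and review this, which is exactly the learning-cost argument above.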

As I look back at our documented thoughts, I see other concerns, too (p99 latency in languages with dynamic memory management, layer 7 programmability). But the above were the key items that we were willing to bet on. So it became clear that we had to place our bet on Istio and Envoy.

Today, most of that list seems obvious. But in 2017, Kubernetes hadn’t quite won. We were still supporting customers on Mesos and Docker Datacenter. The need for service mesh as a technology pattern was becoming more obvious, but back then Istio was novel - not mainstream. 

I'm feeling very good about our bets on Istio and Envoy. There have been growing pains, to be sure. When I survey the state of these projects now, I see mature, but not stagnant, open source communities. There's a plethora of service mesh choices, so the pattern is established. Moreover, the continued prevalence of Istio, even with so many other choices, convinces me that we got that part right.

 

But what about...?

While Istio and Envoy are a great fit for all those bullets, there are certainly additional considerations. As with most concerns in a nascent market, some are legitimate and some are merely noise. I'd like to address some of the most common that I hear from conversations with users.

"I hear the control plane is too complex" - We hear this one often. It’s largely a remnant of past versions of Istio that have been re-architected to provide something much simpler, but there's always more to do. We're always trying to simplify. The two major public steps that Istio has taken to remedy this include removing standalone Mixer, and co-locating several control plane functions into a single container named istiod.

However, there's some stuff going on behind the curtains that doesn't get enough attention. Kubernetes makes it easy to deploy multiple containers. Personally, I suspect the root of this complaint wasn't so much "there are four running containers when I install" but "Every time I upgrade or configure this thing, I have to know way too many details."  And that is fixed by attention to quality and user-focus. Istio has made enormous strides in this area. 

"Too many CRDs" - We've never had an actual user of ours take issue with a CRD count (the set of API objects it's possible to define). However, it's great to minimize the number of API objects you may have to touch to get your application running. Stealing a paraphrasing of Einstein, we want to make it as simple as possible, but no simpler. The reality: Istio drastically reduced the CRD count with new telemetry integration models (from "dozens" down to 23, with only a handful involved in routine app policies). And Aspen Mesh offers a take on making it even simpler with features like SecureIngress that map CRDs to personas - each persona only needs to touch 1 custom resource to expose an app via the service mesh.

"Envoy is a resource hog" - Performance measurement is a delicate art. The first thing to check is that wherever you're getting your info from has properly configured the system-under-measurement.  Istio provides careful advice and their own measurements here.  Expect latency additions in the single-digit-millisecond range, knowing that you can opt parts of your application out that can't tolerate even that. Also remember that Envoy is doing work, so some CPU and memory consumption should be considered a shift or offload rather than an addition. Most recent versions of Istio do not have significantly more overhead than other service meshes, but Istio does provide twice as many feature, while also being available in or integrating with many more tools and products in the market. 

"Istio is only for really complicated apps” - Sure. Don’t use Istio if you are only concerned with a single cluster and want to offload one thing to the service mesh. People move to Kubernetes specifically because they want to run several different things. If you've got a Money-Making-Monolith, it makes sense to leave it right where it is in a lot of cases. There are also situations where ingress or an API gateway is all you need. But if you've got multiple apps, multiple clusters or multiple app teams then Kubernetes is a great fit, and so is a service mesh, especially as you start to run things at greater scale.

In scenarios where you need a service mesh, it makes sense to use the service mesh that gives you a full suite of features. A nice thing about Istio is you can consume it piecemeal - it does not have to be implemented all at once. So you only need mTLS and tracing now? Perfect. You can add mTLS and tracing now and have the option to add metrics, canary, traffic shifting, ingress, RBAC, etc. when you need it.
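For example, turning on mTLS for just a single namespace is one small, self-contained step; here is a minimal sketch (the payments namespace is a placeholder):

```yaml
# Require mutual TLS for all workloads in one namespace only.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments       # placeholder namespace
spec:
  mtls:
    mode: STRICT            # sidecars accept only mutual-TLS traffic
```

Nothing else in the cluster changes; other namespaces can stay on Istio's permissive default until you're ready to migrate them.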

We’re excited to be on the Istio journey and look forward to continuing to work with the open source community and project to continue advancing service mesh adoption and use cases. If you have any particular question I didn’t cover, feel free to reach out to me at @notthatjenkins. And I'm always happy to chat about the best way to get started on or continue with service mesh implementation. 



Steering The Future Of Istio

I’m honored to have been chosen by the Istio community to serve on the Istio Steering Committee along with Christian Posta, Zack Butcher and Zhonghu Xu. I have been fortunate to contribute to the Istio project for nearly three years and am excited by the huge strides the project has made in solving key challenges that organizations face as they shift to cloud-native architecture. 

Maybe what’s most exciting is the future direction of the project. The core Istio community realizes and advocates that innovation in Open Source doesn't stop with technology - it’s just the starting point. New and innovative ways of growing the community include making contributions easier, Working Group meetings more accessible and community meetings an open platform for end users to give their feedback. As a member of the steering committee, one of my main goals will be to make it easier for a diverse group of people to more easily contribute to the project.

To share a bit of my personal journey: when I started contributing to Istio, I found it intimidating to present rough ideas or proposals in an open Networking WG meeting filled with experts and leaders from Google and IBM (even though they were very welcoming). I understand how difficult it can be to get started contributing to a new community, so I want to ensure the Working Group and community meetings are a place for end users and new contributors to share ideas openly, and also to learn from industry experts. I will focus on increasing participation from diverse groups by working to make Istio the most welcoming community possible. In this vein, it will be important for the Steering Committee to further define and enforce a code of conduct, creating a safe place for all contributors.

The Istio community’s effort towards increasing open governance by ensuring no single organization has control over the future of the project has certainly been a step in the right direction with the new makeup of the steering committee. I look forward to continuing work in this area to make Istio the most open project it can be. 

Outside of code contributions, marketing and brand identity are critically important aspects of any open source project. It will be important to encourage contributions from marketing and business leaders to ensure we recognize non-technical contributions. Addressing this is less straightforward than encouraging and crediting code commits, but a diverse, vendor-neutral marketing team in open source can create powerful ways to reach users and drive adoption, which is critical to the success of any open source project. Recent user empathy sessions and user survey forms are a great starting point, but our ability to put these learnings into action and adapt as a community will be a key driver in growing project participation.

Last, but definitely not least, I’m keen to leverage my experience and feedback from years of work with Aspen Mesh customers and broad enterprise experience to make Istio a more robust and production-ready project. 

In this vein, my fellow Aspen Mesher Jacob Delgado has worked tirelessly for many months contributing to Istio. As a result of his contributions, he has been named a co-lead for the Istio Product Security Working Group. Jacob has been instrumental in championing security best practices for the project and has also helped responsibly remediate several CVEs this year. I’m excited to see more contributors like Jacob make significant improvements to the project.

I'm humbled by the support of the community members who voted in the steering elections and chose such a talented team to shepherd Istio forward. I look forward to working with all the existing, and hopefully many new, members of the Istio community! You can always reach out to me through email, Twitter or Istio Slack for any community, technical or governance matter, or if you just want to chat about a great idea you have.


What Are Companies Using Service Mesh For?

We recently worked with 451 Research to identify current trends in the service mesh space. Together, we identified some key service mesh trends and patterns around how companies are adopting service mesh, and emerging use cases that are driving that adoption. Factors driving adoption include how service mesh automates and bolsters security, and a recognition of service mesh observability capabilities to ease debugging and decrease Mean Time To Resolution (MTTR). Check out this video for more from 451 Research's Senior Analyst in Application and Infrastructure Performance, Nancy Gohring, on this topic:

Who’s Using Service Mesh 

According to data and insights gathered by 451 Research, service mesh already has significant momentum, even though it is a young technology. Results from the Voice of the Enterprise: DevOps, Workloads & Key Projects 2020 survey tell us that 16% of respondents had adopted service mesh across their entire IT organizations, and 20% had adopted service mesh at the team level. Outside of those numbers, 38% of respondents also reported that they are in trials or planning to use service mesh in the future. As Kubernetes dominates the microservices landscape, the need for a service mesh to manage layer 7 communication is becoming increasingly clear. 

Figure: Service mesh adoption (Source: 451 Research)

In tandem with this growing adoption trend, the technology itself is expanding quickly. While the top driver of service mesh adoption continues to be supporting traffic management, service mesh provides many additional capabilities beyond controlling traffic. 451 found that key new capabilities the technology provides include greatly enhanced security as well as increased observability into microservices.

Service Mesh and Security

Many organizations—particularly those in highly regulated industries such as healthcare and financial services—need to comply with very demanding security and regulatory requirements. A service mesh can be used to enforce or enhance important security and compliance policies more consistently, and across teams, at an organization-wide level. A service mesh can be used to:

  • Apply security policies to all traffic at ingress, and encrypt traffic traveling between services using mTLS
  • Add Zero-Trust networking
  • Govern certificate management for authenticating identity
  • Enforce the principle of least privilege with role-based access control (RBAC), as sketched after this list
  • Manage policies consistently, regardless of protocols and runtimes 
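As an illustration of the least-privilege item above, here is a minimal sketch of an Istio AuthorizationPolicy; the namespaces, service account and labels are placeholders:

```yaml
# Allow only the frontend's service account to call the orders
# service, and only via GET; all other requests to it are denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: orders-viewer
  namespace: orders                 # placeholder namespace
spec:
  selector:
    matchLabels:
      app: orders                   # the workload being protected
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/frontend/sa/web"]  # placeholder identity
    to:
    - operation:
        methods: ["GET"]
```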

These capabilities are particularly important for complex microservices deployments, and allow DevOps teams to ensure a strong security posture while running in production at global scale. 

Observability and Turning Your Data into Intelligence

In addition to helping enterprises improve their security posture, a service mesh also greatly improves observability through traces and metrics that allow operators to quickly root-cause any failures and ensure resilient applications. Enabling the rapid resolution of performance problems allows DevOps teams to reduce mean time to resolution (MTTR) and optimize engineering efficiency.

The broader market trends around observability and advanced analytics with open source technologies are also key to the success of companies adopting service mesh. There are challenges around managing microservices environments, and teams need better ways of identifying the sources of performance issues in order to resolve problems faster and more efficiently. Complex microservices-based applications generate very large amounts of data. Many open source projects are addressing this by making it easier for users to collect data from these environments, and advancements in analytics tools are enabling users to extract the signal from the noise, quickly directing users to the source of performance problems. 

Overcoming this challenge is why we created Aspen Mesh Rapid Resolve. It allows users to see any configuration or policy changes made within Kubernetes clusters, which is almost always the cause of failures. The Rapid Resolve timeline view makes it simple for operators to look back in time to pinpoint any changes that resulted in performance degradation. 

Figure: The Aspen Mesh Rapid Resolve timeline view

This enables Aspen Mesh users to identify root causes, report actions and apply fixing configurations all in one place. For example, the Rapid Resolve suite offers many new features including:

  • Restore: a smarter, machine-assisted way to effectively reduce the set of things an operator or developer has to look through to find the root cause of failure in their environment. Root causing in distributed architectures is hard. Aspen Mesh Restore immediately alerts engineers to any performance outside acceptable thresholds and makes it obvious where any configuration, application or infrastructure changes occurred that are likely to be breaking changes.
  • Replay: a one-stop shop for application troubleshooting and reducing time to recovery. Aspen Mesh Replay gives you the current and the past view of your cluster state, including microservices connectivity, traffic and service health, and relevant events like configuration changes and alerts along the way. This view is great for understanding and diagnosing cascading failures. You can easily roll back in time and detect where a failure started. It's also a good tool for sharing information in larger groups where you can track the health of your cluster visually over time.

The Future of Service Mesh

Companies strive for stability with agility, which allows them to meet the market and users where they are, and thrive even in an uncertain marketplace. According to 451 Research,

“Businesses are employing containers, Kubernetes and microservices as tools that allow them to more quickly respond to customer demands and competitive threats. However, these technologies introduce new and potentially significant management challenges. Advanced organizations have turned to service mesh to help solve some of these problems. Service mesh technology can remove infrastructure burdens from developers, enabling them to focus on creating valuable application features rather than managing the mechanics of microservices communications. But managing the communications layer isn’t the only benefit a service mesh brings to the table. Increasingly, users are recognizing the role service meshes can play in collecting and analyzing important observability data, as well as their ability to support security requirements.”

The adoption of containers, Kubernetes and service mesh is continuing to grow, and both security and observability will be key drivers that increase service mesh adoption in the coming years.

 



Digital Transformation: How Service Mesh Can Help

Your Company’s Digital Transformation

It’s happening everywhere, and it’s happening fast. In order to meet consumers head on in the best, most secure ways, enterprises are jumping on the digital transformation train (check out this Forrester report). 

Several years ago, digital transformations saw companies moving from monolithic architectures towards microservices and Kubernetes, but service mesh was in its infancy. No one knew they'd need something to help manage service-to-service communication. Now, with increasing complexity and demands coupled with thinly-stretched resources or teams without service mesh expertise, supported service mesh is becoming a good solution for many--especially for DevOps teams.

Service Mesh for DevOps

"DevOps" is a term used to describe the business relationship between development and IT operations. Mostly, the term is used when referring to improving communication and collaboration between the two teams. But while Dev is responsible for creating new functionality to drive business, Ops is often the unsung--but extremely important--hero behind the scenes. In IT Ops, you’re on the hook for strategy development, system design and performance, quality control, direction and coordination of your team all while collaborating with the Dev team and other internal stakeholders to achieve your business’s goals and drive profitability. Ultimately, it’s the Dev and Ops teams who are responsibility to ensure that teams are communicating effectively, systems are monitored correctly, high customer satisfaction is achieved and projects and issue resolution are completed on time. A service mesh can help with this by enabling DevOps.

Integrating a Service Mesh: Align with Business Objectives

As you think about adopting a service mesh, keep in mind that your success over time is largely dependent on aligning with your company’s business objectives. Sharing business objectives like these with your service mesh team will help to ensure you get--and keep--the features and capabilities that you really need, when you need them, and that they stay relevant.

What are some of your company’s business objectives? Here are three we’ve identified that a service mesh can help to streamline:

1. Automating More Process (i.e. Removing Toil)
Automating processes frees up your team from mundane tasks so they can focus on more important projects. Automation can save you time and money.

2. Increasing Infrastructure Performance
Building and maintaining a battle-tested environment is key to your end users’ experience, and therefore to customer retention and your company’s bottom line.

In addition, much of your time is spent developing strategies to monitor your systems and working through issue resolution as quickly as possible--whether issues pop up during the workday or in the middle of the night. Fortunately, because a service mesh comes with observability, security and resilience features, it can help alleviate these responsibilities, decreasing MTTD and MTTR.

3. Maintaining Delivery to Customers
Reducing friction in the user experience is the name of the game these days, so UX and reliability are key to keeping your end users happy. If you’re looking at a service mesh, you’re already using a microservices architecture, and you’re likely using Kubernetes clusters. But once those become too complex in production--or don’t have all the features you need--it’s time to add a service mesh into the mix. Service mesh observability features like cluster health monitoring, service traffic monitoring, easy debugging and root cause identification with distributed tracing help with this. In addition, an intuitive UI is key to surfacing these features in a way that is easy to understand and manipulate, so make sure you’re looking at a service mesh that’s easy for your Dev team to use. This will help provide a more seamless (and secure) experience for your end users.

Evolution; Not Revolution

How do you actually go about approaching the process of integrating a service mesh? What will drive success is for you to have agility and stability. But that can be a tall order, so it can be helpful to approach integrating a service mesh as evolution, rather than revolution. Three key areas to consider while you’re evaluating a service mesh include:

  1. Mitigating risk
  2. Production readiness
  3. Policy frameworks

Mitigating Risk
Risk can be terrifying, so it’s imperative to take steps to ensure that risk is mitigated as much as possible. The only time your company should be making headlines is because of good news. Ensuring security, compliance, and data integrity is the way to go. With security and compliance at top of mind for many, it’s important to address security head on. 

With a well-designed enterprise service mesh, you can expect plenty of security, compliance and policy features so it’s easy for your company to get a zero-trust network. Features can include anything from ensuring the principle of least privilege and secure default settings to technical features such as fine-grained RBAC and incremental mTLS.

Production Readiness
Your applications are ready to be used by your end users, and your technology stack needs to be ready too. What makes a real impact here is reliability. Service mesh features like dynamic request routing, fast retries, configuration vetters, circuit breaking and load balancing greatly increase the resiliency of microservice architectures. Support is also a feature that some enterprises will want to consider in light of whether service mesh expertise is a core in-house skill for the business. Having access to an expert support team can make a tremendous difference in your production readiness and your end users’ experiences.
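To give a sense of how those resilience features are expressed, here is a minimal sketch of retries and circuit breaking in Istio; the checkout service and the specific thresholds are illustrative, not recommendations:

```yaml
# Retry failed requests to the checkout service...
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
  - checkout
  http:
  - route:
    - destination:
        host: checkout
    retries:
      attempts: 3                      # up to 3 retries per request
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
---
# ...and eject unhealthy endpoints so traffic routes around them.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: checkout
spec:
  host: checkout
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100   # shed excess load
    outlierDetection:
      consecutive5xxErrors: 5          # trip after 5 consecutive errors
      interval: 30s
      baseEjectionTime: 60s
```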

Policy Frameworks
While configuration is useful for setting up how a system operates, policy is useful for dictating how a system responds when something happens. With a service mesh, the power of policy and configuration combined provides capabilities that can drive outcome-based behavior from your applications. A policy catalog can accelerate this, while analytics on policy violations helps you understand the best actions to take. Together, these improve developer productivity with canary, authorization and service availability policies.
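As one concrete example of an authorization policy, here is a minimal Istio sketch that only lets a hypothetical frontend service account issue GET requests to selected backend workloads; all of the names are illustrative.

```yaml
# Hypothetical services: only the "web" service account in the
# "frontend" namespace may send GET requests to "orders" workloads.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-reads
  namespace: backend
spec:
  selector:
    matchLabels:
      app: orders
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/frontend/sa/web"]
    to:
    - operation:
        methods: ["GET"]
```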

How to Measure Service Mesh Success

No plan is complete without a way to measure, iterate and improve your success over time. So how do you go about measuring the success of your service mesh? There are a lot of factors to take into consideration, so it's a good idea to talk to your service mesh provider in order to leverage their expertise. But in the meantime, there are a few things you can consider to get an idea of how well your service mesh is working for you. Start by finding a good way to measure 1) how your security and compliance posture is impacted, 2) how much you're able to reduce downtime and 3) what differences you see in efficiency.
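For the downtime measurement, one simple starting point, sketched below under the assumption that you scrape Istio's standard istio_requests_total metric into Prometheus, is a recording rule that tracks the mesh-wide 5xx error ratio over time; the rule and group names are hypothetical.

```yaml
# Records the mesh-wide ratio of 5xx responses over 5 minutes, a simple
# reliability proxy you can compare before and after adopting a mesh.
groups:
- name: service-mesh-success
  rules:
  - record: mesh:request_error_ratio:rate5m
    expr: |
      sum(rate(istio_requests_total{response_code=~"5.."}[5m]))
        /
      sum(rate(istio_requests_total[5m]))
```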

Looking for more specific questions to ask? Check out the eBook, Getting the Most Out of Your Service Mesh for ideas on the right questions to ask and what to measure for success.



When Do You Need A Service Mesh?

One of the questions I often hear is: "Do I really need a service mesh?" The honest answer is "It depends." Like nearly everything in the technology space (or, more broadly, nearly everything), the answer depends on the benefits and costs. But after having helped users progress from exploration to production deployments in many different scenarios, I'm here to share my perspective on which inputs to include in your decision-making process.

A service mesh provides a consistent way to connect, secure and observe microservices. Most service meshes are tightly integrated with an orchestration platform, commonly Kubernetes. There's no way around it; a service mesh is another thing, and at least part of your team will have to learn it. That's a cost, and you should compare that cost to the benefits of operational simplification you may achieve.

But apart from costs and benefits, what should you be asking in order to determine if you really need a service mesh? The number of microservices you’re running, as well as urgency and timing, can have an impact on your needs.

How Many Microservices?

If you're deploying your first or second microservice, I think it is just fine to not have a service mesh. You should, instead, focus on learning Kubernetes and factoring stateless containers out of your applications first. You will naturally build familiarity with the problems that a service mesh can solve, and that will make you much better prepared to plan your service mesh journey when the time comes.

If you have an existing application architecture that provides the observability, security and resilience that you need, then you are already in a good place. For you, the question becomes when to add a service mesh. We usually see organizations notice the toil associated with utility code to integrate each new microservice. Once that toil gets painful enough, they evaluate how they could make that integration more efficient. We advocate using a service mesh to reduce this toil.

The exact point at which service mesh benefits clearly outweigh costs varies from organization to organization. In my experience, teams often realize they need a consistent approach once they have five or six microservices. However, many users push to a dozen or more microservices before they notice the increasing cost of utility code and the increasing complexity of slight differences across their applications. And, of course, some organizations continue scaling and never choose a service mesh at all, investing in application libraries and tooling instead. On the other hand, we also work with early birds that want to get ahead of the rising complexity wave and introduce a service mesh before they've got half a dozen microservices. But the number of microservices you have isn't the only factor. You'll also want to consider urgency and timing.

Urgency and Timing

Another part of the answer to "When do I need a service mesh?" is timing. The urgency of considering a service mesh depends on your organization's challenges and goals, but it can also be gauged from your current processes and state of operations. Here are some states that may reduce or eliminate your urgency to adopt a service mesh:

  1. Your microservices are all written in one language ("monoglot") by developers in your organization, building from a common framework.
  2. Your organization dedicates engineers to building and maintaining org-specific tooling and instrumentation that's automatically built into every new microservice.
  3. You have a partially or totally monolithic architecture where application logic is built into one or two containers instead of several.
  4. You release or upgrade all-at-once after a manual integration process.
  5. You use application protocols that existing service meshes don't serve well (i.e., usually anything other than HTTP, HTTP/2 or gRPC).

On the other hand, here are some signals that you will need a service mesh and may want to start evaluating or adopting early:

  1. You have microservices written in many different languages that may not follow a common architectural pattern or framework (or you're in the middle of a language/framework migration).
  2. You're integrating third-party code or interoperating with teams that are a bit more distant (for example, across a partnership or M&A boundary) and you want a common foundation to build on.
  3. Your organization keeps "re-solving" problems, especially in the utility code (my favorite example: certificate rotation, while important, is no scrum team's favorite story in the backlog).
  4. You have robust security, compliance or auditability requirements that span services.
  5. Your teams spend more time localizing or understanding a problem than fixing it.

I consider this last point the three-alarm fire signaling that you need a service mesh, and it's a good way to return to the quest for simplification. When an application is failing to deliver a quality experience to its users, how does your team resolve it? We work with organizations that report that finding the problem is often the hardest and most expensive part.

What Next?

Once you've localized the problem, can you alleviate or resolve it? It's a painful situation if the only fix is to develop new code or rebuild containers under pressure. That's where you see the benefit of keeping resiliency capabilities independent of the business logic, as a service mesh does.
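To illustrate that independence, here is a minimal sketch of mitigating a slow upstream dependency purely through mesh configuration, with no code change or container rebuild; the inventory service name and the timeout values are hypothetical.

```yaml
# Hypothetical slow "inventory" service: cap total request time and add
# one retry via configuration alone, applied live by the mesh.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: inventory
spec:
  hosts:
  - inventory
  http:
  - route:
    - destination:
        host: inventory
    timeout: 6s
    retries:
      attempts: 1
      perTryTimeout: 3s
```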

If this story is familiar to you, you may need a service mesh right now. If you're getting by with your existing approach, that’s great. Just keep in mind the costs and benefits of what you’re working with, and keep asking:

  1. Is what you have right now really enough, or are you spending too much time trying to find problems instead of developing and providing value for your customers?
  2. Are your operations working well with the number of microservices you have, or is it time to simplify?
  3. Do you have critical problems that a service mesh would address?

Keeping tabs on the answers to these questions will help you determine if — and when — you really need a service mesh.

In the meantime if you're interested in learning more about service mesh, check out The Complete Guide to Service Mesh.



To Multicluster, or Not to Multicluster: Solving Kubernetes Multicluster Challenges with a Service Mesh

If you are going to be running multiple clusters for development and organizational reasons, it's important to understand your requirements, decide whether you want to connect those clusters in a multicluster environment and, if so, understand the various approaches and the tradeoffs associated with each option.

Kubernetes has become the container orchestration standard, and many organizations are currently running multiple clusters. But while communication issues within a cluster are largely solved, communication across clusters remains a major challenge for most organizations.

A service mesh helps address these multicluster challenges. Start by identifying what you want, then shift to how to get it. We recommend understanding your specific communication use case, identifying your goals, and then creating an implementation plan.

Multicluster offers a number of benefits:

  • Single pane of glass
  • Unified trust domain
  • Independent fault domains
  • Intercluster traffic
  • Heterogeneous/non-flat network

These benefits can be achieved through various approaches:

  • Independent clusters
  • Common management
  • Cluster-aware service routing through gateways
  • Flat network
  • Split-horizon Endpoint Discovery Service (EDS)

If you have decided to go multicluster, your next move is deciding on the best implementation method and approach for your organization. A service mesh like Istio can help, and when used properly it can make multicluster communication painless.
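As a rough sketch of what that setup can involve with Istio, the install values below give one cluster a mesh, cluster and network identity so control planes can discover and route to services across cluster boundaries; the identifiers are hypothetical, and the full setup (shared root certificates, east-west gateways) varies by Istio version and topology.

```yaml
# Hypothetical identities for cluster1 in a two-cluster mesh; the other
# cluster would use its own clusterName (and network, if networks differ).
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
```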

Read the full article here on InfoQ’s site.