When You Need (Or Don’t Need) Service Mesh w/ B. Cameron Gain

The New Stack Makers Podcast
When You Need (Or Don’t Need) Service Mesh
The adoption of a service mesh is increasingly seen as an essential building block for any organization that has opted to make the shift to a Kubernetes platform. Because a service mesh offers observability, connectivity and security checks for microservices management, the underlying capabilities — and development — of Istio are a critical component in its operation and, eventually, standardization.
In the second of The New Stack Makers’ three-part podcast series featuring Aspen Mesh, The New Stack correspondent B. Cameron Gain opens the discussion about what service mesh really does and how it is a technology pattern for use with Kubernetes. Joining in the conversation are Zack Butcher, founding engineer at Tetrate, and Andrew Jenkins, co-founder and CTO of Aspen Mesh, who also cover how service mesh, and especially Istio, helps teams get more out of containers and Kubernetes across the whole application life cycle.
Voiceover: Hello, welcome to The New Stack Makers, a podcast where we talk about at-scale application development, deployment and management.
Voiceover: Aspen Mesh provides a simpler and more powerful distribution of Istio through a service mesh policy framework, a simpler user experience delivered through the Aspen Mesh UI and a fully supported, tested and hardened distribution of Istio that makes it viable to operate service mesh in the enterprise.
Bruce Cameron Gain: Hi, it’s B. Cameron Gain of The New Stack. Today, we’re going to speak about making Kubernetes more efficient with a service mesh. And this is part of our Aspen Mesh three-part Makers series. Today, I’m here with Zack Butcher, founding engineer of Tetrate, and Andrew Jenkins, co-founder and CTO of Aspen Mesh. Thank you for joining us.
Zack Butcher: Thanks for having me.
Bruce Cameron Gain: So the adoption of a service mesh is really increasingly seen as an essential building block for any organization that has opted to make the shift to a Kubernetes platform. As these service mesh offerings provide observability, connectivity and security checks, et cetera, for microservices management, I want to look at the underlying capabilities and development of Istio specifically, and service meshes in general, and how they are a critical component in operations of Kubernetes deployments. So, Andrew, could you please put service mesh in context? Say an organization might use it to migrate to a cloud native environment. What do they need to know?
Andrew Jenkins: Yeah, so the migration to cloud native for organizations that we work with always kind of involves a couple of steps along the way. So there is kind of an end state goal: you want to have microservices to unlock developer efficiency, by having developers and people able to move fast on smaller components that are all stitched up into an integrated experience for users. But you have to get there from here, from wherever you are. And so we find that organizations use service mesh a lot to help out with that evolutionary path. So that involves taking where we are now, moving some pieces into more of the cloud native model, and developing new cloud native components, but without leaving behind everything that you’ve already done. And of course, like you talked about, it’s really important to be able to have a good understanding, observability, of all of these different components of the system, to be able to know that it’s secure, to be able to connect all these pieces together, whether they’re all in public clouds, on-prem, or different clouds. And so a service mesh can really help with all that connectivity, the security, all those components there. That’s why we see organizations latching on to service mesh as an answer for not just the deployment problem, but how do you integrate all these pieces together?
Bruce Cameron Gain: Well, thank you. Zack, this is kind of a reflection, maybe, of what [Andrew] just described, but as you know, migrations are happening; it’s not greenfield. That’s very rare. So, as [Andrew] described, they’re moving from data centers to cloud native environments, for example, and they’re doing it in bits and pieces. So as they’re doing this, they’re doing it, I would imagine, most often in incremental steps, and often in different cloud environments as well. And so what do they need to know as far as the operations go, and how will the service mesh come into play for these multi-cloud deployments? Is it possible that just one Istio, or just one platform service mesh, will take care of everything? Or, I would imagine, they’re piecing things together until we get to where we’ll just have one service mesh interface.
Zack Butcher: Yeah. So I think this idea of the transition and the migration that Andrew touched on is really, really relevant right now. And in fact, this was the original reason I left Google, where I worked on Istio, to help start Tetrate. Right. When we were at Google, we had built out the project for like a year and we were running around trying to get the initial Istio users. What we heard consistently was, hey, this is great, but I don’t only have Kubernetes. And I think it’s important that we understand that data centers aren’t going away any time soon. Right. When you go out and build your own data center, that’s a 50 year investment, right? 40, 50 years easily that you’re expecting to get value out of. That’s not going to go away tomorrow just because you’re moving to cloud, and you will have to split infrastructure for a long time. It will be the norm, and it will be increasingly the norm, to have this kind of split infrastructure between things that you own and different cloud providers that fit different use cases well. And that’s exactly where we see the need for a common substrate. Right. How do you start to enable communication of those components that need to communicate across these different environments? Right. That’s where the identity and security aspects of a mesh come in. Right. How do you enforce from an organizational perspective? I have regulatory controls. Right. And I need to ensure that I have controls in place across all of my environments that are consistent, that I can prove to an auditor are consistent, and that are enforced across all of these environments.
A service mesh, because of the centralized control, because of the consistency that it gives you, is incredibly useful for helping bring sanity to the craziness that is the split infrastructure world, this kind of multi-cloud, on-prem world.
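[Editor’s note: one concrete way Istio expresses that kind of consistent, auditable control is a mesh-wide policy resource. As a minimal sketch, not drawn from the conversation itself, a single PeerAuthentication object in Istio’s root namespace (istio-system by default) requires mutual TLS for every workload in the mesh:]

```yaml
# Mesh-wide policy: placing this in the Istio root namespace
# requires mTLS for all workloads, giving a single enforcement
# point that an auditor can inspect.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

[Because the control plane distributes this to every sidecar, the same rule holds whether a workload runs on-prem or in a public cloud cluster joined to the mesh.]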
Bruce Cameron Gain: Well, without mentioning any names, some providers are making a claim that, maybe not now, but very shortly, you can just have one single service mesh interface for multiple cloud environments, including your data center as well. How far are we away from that scenario?
Zack Butcher: Yeah, I think it depends on what you mean by interface. Right. Are we going to get into a world where there’s a common point where me as a developer can go and push configuration about how I want my application to behave on the network, and that is going to be realized across all of the physical infrastructure that my company has, no matter where my service is running? If that’s what we mean when we say there’s going to be a common interface, yes, I 100 percent think that we are going to land at a world like that. Right. Where individual developers can stop thinking about individual infrastructure, stop thinking about individual clusters, because that’s not really what’s relevant for me shipping business value to my users. Instead, I want to be able to think about the services that I have and need to maintain, how I want them to behave, and the features and functionality that they have. Right. A fundamentally more value-focused world.
Bruce Cameron Gain: Andrew, do you agree? And at the same time, if you do agree, when could that scenario happen?
Andrew Jenkins: I think I strongly agree that the developer should be out of the business of worrying about this interface. And I think we’ll see, we already see, a lot of commonality, even across different service mesh implementations, for the features, and especially kind of around Kubernetes as a touchpoint for organizing policy and all these sorts of things. But at exactly the same time, it’s really important that my organization can guarantee to my security folks that we are meeting the security rules that we’ve set up internally. And maybe what the organization wants to do is not to give developers the one common underlying interface that will unify all service meshes. It may want to give a kind of profile set: we’ve already designed these application bundles to talk about applications this way or that way, and we know how those map to security requirements. And so in that kind of world, organizations are building what they want to show to their developers on top of the underlying capabilities of the infrastructure. They’re making some choices, kind of just like Zack said, so that individual developers, who may not be experts on every single layer, can take advantage of the experts in the organization who have thought about those things and mapped them back to requirements.
Andrew Jenkins: So I don’t think that external, sort of stamped-out, forced adherence is very successful in the Kubernetes cloud-native world. So I think what you see is kind of a bubbling up of the things that are really common into a couple of interfaces. And there are already efforts underway for some of these sorts of things. And then there’ll be parts of these interfaces that people like, and there’ll be kind of gravity around those, and they’ll solidify. And that’s kind of, I think, happening a little bit. I think in the next year or so, you’ll see that happen much more strongly around applications and how they interact with things like service meshes. And then I think a few years from now, big organizations will have their own takes on these. They’ll be built in. And if I walk in the door of organization A on day one, I’ll know where to go to get the catalog that describes my application, and I’ll just run with it and rely on the experts underneath, both in my own organization and out there in the community, that sort of map that to the actual implementation underneath.
Bruce Cameron Gain: Well, thanks, Andrew, and I definitely want to revisit that topic more specifically. But as you remove the development layer, as far as the operations folks go, when do you think we might reach the stage where operations staff, for example, just has to manage maybe one sole panel or interface to deal with the different deployments out there? I mean, as you just mentioned, the developer should not have to worry about the infrastructure aspect of things. But how far are we away from the day where the operations folks are able to streamline everything to a single pane, so to speak, working with that service mesh type of interface, when they can just instantly or near instantaneously deploy and set governance standards and compliance standards, et cetera?
Andrew Jenkins: So there are some organizations that I think are already pretty far down this path with Istio, and Istio has a bunch of great blog posts where users come in and talk about the ways that they’re using it and configuring it. And so there are some organizations that have kind of already built a whole lot of this around Istio. The thing that we’ll start seeing, though, is that rather than everybody having to invest a whole lot in service mesh expertise to get that outcome, there will start to be some common best practices, implementation pieces, things from vendors and the ecosystem that sort of simplify this, so that the amount of effort an organization has to invest to get that benefit will go way down, and that will cause adoption to increase dramatically. So it now takes investment to do this all yourself. I think that we’ll start to see this become a whole lot more easily adopted into organizations going forward.
Zack Butcher: Yeah, I think that’s spot on. Right. As far as something like a single pane of glass goes, Kiali, for example, is a good example, I think, in open source of starting to build that kind of thing out, where Kiali is an open source telemetry console on top of Prometheus that ships with Istio and gives a set of dashboards and nice visibility. Right. It’s not a single pane of glass, in that you can’t do policy control there; it’s only visibility. But I think that’s actively being worked on, both by vendors as well as in the community. So I think that is not a very far away world. I do think, though, Andrew is exactly correct when he was talking about standardization and picking interfaces as a community. The simple matter is, this is still very early days for the mesh. Right? We are still learning and developing best practices, and we are doing exactly that as a community together. I think the important things that we start to standardize now are not necessarily APIs and interfaces, but practices, techniques and standards for deployments. Those kinds of things, I think, are really what’s needed badly today. We can look at things like nicer unified interfaces, or the potential for APIs over top of multiple meshes, that kind of thing, once we better understand what the APIs that we actually need are. Right. Because it’s still just very early for that kind of thing.
Bruce Cameron Gain: Indeed. And at the same time, and this was one of the questions actually, I’m going to rephrase the question; I’ve just formulated another one based on your answer. It is indeed the early days of service mesh, and Kubernetes actually, and you have the main features: observability, security and, obviously, traffic management. But to rephrase my original question: not which of those three features is necessarily the most important, but which of those features needs the most work? And actually, security always needs work. So maybe, where do we really need to see some improvements: in observability or traffic management, or both?
Andrew Jenkins: You know, there’s always room for improvements in both, and I think, especially in Istio, my feeling is there’s this sort of stable foundation and a lot of room for innovation on top of that, including some Istio implementation advancements that make iteration easier to do more rapidly, so that we can make progress on a lot of different fronts. I’ll tell you that when I was thinking about what was most important, in the early days I was totally wrong, in that my thinking was that traffic management was going to be the biggest, most conspicuous, spectacular feature coming out of a service mesh. And what I found in the early days for sure was that people needed that observability foundation first, to even understand all of these cool new pieces that they had deployed in the cloud and how they’re interacting, even going back to the security front: what’s talking to what, is it secured, what does it map back to in my security policy. They needed that way before they could start thinking about cool novel ways of doing experimental deployments, canary deployments, progressive delivery. So though there’s been a lot of progress on observability, and there’s a lot of foundational work, I don’t know if it’s the most important, but I bet you that in the near future we’ll see a lot more emphasis on the cool things that you could do with advanced traffic management.
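[Editor’s note: for a sense of what that kind of traffic management looks like in practice, here is a minimal Istio canary sketch. The `reviews` service and version labels are illustrative, borrowed from Istio’s common examples rather than from this conversation: a DestinationRule names the two versions, and a VirtualService splits traffic 90/10 between them.]

```yaml
# Define two subsets of the (hypothetical) reviews service by pod label.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
# Send 10% of traffic to the canary (v2) without touching application code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

[Shifting the weights progressively toward v2 is the “progressive delivery” pattern Andrew alludes to.]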
Zack Butcher: Yeah, yeah. No, I think Andrew is pretty spot on there. Again, with respect to what is the biggest thing to improve, I think we always need to look at: what does it take, what do I, an operator, or me, an application developer, need to do to start to actually realize value from Istio in my environment? Right. And so that cycle, that time, you know, what configuration do I have to learn, what configuration do I have to write, what ways can we remove that configuration? That’s a big thing to improve. In my mind, looking forward at the project, it is exactly like you said: we have a very solid base in terms of the capabilities that exist in the system today. Then there’s the answer to your question around which is the most important of those three pillars, and in my mind, there’s kind of two answers to this. One of the answers is: none of the three, because one of the key value adds of the mesh is that it brings all three of these together. Right. And Andrew kind of alluded to that in his answer: you need the observability to be able to see and understand what’s happening with the traffic, to be able to get a handle on the security in your system. They all kind of go together in some way.
Zack Butcher: Right. I can say, from the perspective of some of the people that I worked with, some of the companies that we work with, that the most important features for them are on the security side of the house, because we work a lot in the financial industry, I should say. And for them, it’s still early days; it’s still kind of expensive to adopt a mesh. In their world, security was the killer thing, the most important thing, that gave them value that warranted adopting the mesh. So I think as well, it’s a little hard to say, because it’s going to depend on your specific set of use cases. I totally agree with what Andrew says: over time, I think traffic will grow into one of the most important pieces, because the observability and the security parts are really kind of table stakes. You have to have those, and they need to be present in your system, and they need to be configured correctly. That gives you the insight into what’s happening and the assurance that you have control over the system. And then the traffic part is really what application developers start to deal with day to day.
Bruce Cameron Gain: How is this analogy, and please be honest if you don’t agree that it’s applicable. Speaking of that, I was thinking about a very high performance car, say a Tesla, for example. You have obviously extremely high levels of torque and speed; that’s one component. You have the user interface, so you have this magnificent screen in the middle. I don’t know if you’ve seen it or not; it’s beautiful. And then you have as well the driverless capabilities. And the third component, obviously, is security and certain ways to keep you safe. If any of those three were negated or stopped working properly, that’s just not going to offer a proper driving experience in a Tesla. And for me, I see that analogy with the service mesh.
Zack Butcher: Exactly. Exactly. Yeah, I think that’s a really apt analogy. Right. They really work best in concert, and they make sense in concert. You know, these three verticals have existed since computing has existed. Right. And these have been separate spaces, and there are people that have really compelling products and can do really interesting things in each of the spaces of observability, of application security and of application traffic management. But the real game changer, in my mind, with a mesh is the way that it brings all of those together under a single, centralized, consistent control, which is that control plane that gives me the single point to configure.
Bruce Cameron Gain: Andrew, you mentioned this a while ago in an article you wrote, and it did extremely well, but it’s a subject you guys might not necessarily like to talk about: in some instances, you don’t need a service mesh. Or, at the same time, could you argue that you do, in fact, if you’re deploying on a Kubernetes environment, not counting serverless, but the cloud native environment, especially when you have several different cloud environments to manage? Are there instances where you don’t need a service mesh, and why?
Andrew Jenkins: So I think there are some. I mean, I don’t think it’s all on us to say, hey, everybody absolutely must use this new thing. Right. There are actually problems where you don’t need Kubernetes. You may not need containers at all, or if you look at serverless, right, there’s another thing beyond that, which is no-code, kind of codeless application development, where I don’t even write code. Well, you can in some cases, and in other cases we know it’s really more suitable to actually write software, right, write code. And so there is always this continuum of what pieces you need. And it’s definitely not the case that all problems are solved by a service mesh and require a service mesh. Zack talked about, especially in the early days, how the security benefits were really key for some of the users that he was working with to justify the investment in the early days of Istio, to adopt Istio and use it. And where we’re at now, I think the security benefits of adopting Istio are at least as good, probably even significantly higher, for all of those organizations. And hopefully the cost of adoption continues to go down as folks like Tetrate and Aspen Mesh and everybody else work on improving the Istio experience, so it becomes even easier to adopt. But let’s be honest, service mesh is a thing that you have to understand at least a little bit about. And so there are some problems, where you have very few services communicating, or you have a very limited ability to insert a service mesh, where it may not justify the effort that you’re going to invest in trying to understand or deploy or implement a service mesh. And I think that as the cost of adoption keeps going down, those become fewer. But that doesn’t mean that it will always be the right answer.
Zack Butcher: And if I can just parlay off that, I think Andrew is exactly correct. What we’ve seen, and will continue to see more and more, is that even within a single organization, there will be use cases that do not fit the mesh. Right. So I was talking a little bit ago with a company that does a lot of video streaming, and for video streaming, a mesh doesn’t provide them very much benefit, but it adds latency in their critical path. It gives them negatives on that side of the house. However, they have a whole API side of the house too, where people go and interact with their products and things like that, where a mesh does make sense. Right. And so even within the context of a single organization, you’re going to see sets of applications or sets of use cases where it may or may not make sense. And that extends out to the entire organization.
Bruce Cameron Gain: And regardless, I’m supposing that in most cases, taking the video streaming company as an example, for the developer that really doesn’t matter that much. I mean, if they’re not worrying about files, for example, not worrying about YAML, et cetera, not worrying about whether there’s a service mesh underneath the covers, so to speak, it’s kind of immaterial, usually or almost exclusively, for developers, or not? I mean, what do they need to know? How do their lives change either way, when there’s a service mesh or not?
Zack Butcher: There’s kind of a mix. Right. And part of this depends on how your organization has decided to approach a service mesh, and part of it depends on how mature you are on that path. Right. In the extreme, in a fully mature organization, the real goal, and this is the goal with DevOps, right, the goal with DevOps is to get developers doing the operations for their own services, to get them involved in production. And, you know, whatever we say about the phrase, that idea is good. And in the extreme, a service mesh enables that. Right. It gives you the ability to put in the hands of individual developers control over how their application behaves, in a higher-level way, without having to go change code and things like that. And I believe that for most organizations that are adopting a mesh, that is a desirable instinct: that any of their developers can reach under the hood and use the mesh to make their applications better, to achieve whatever they need to achieve using the mesh. But then there’s the question of how you get them to that point, and how you actually enable successful adoption in an organization.
Zack Butcher: Right. And so what we typically see is that the path for adoption starts with hiding the mesh. Right. Get the people operating the system, the kind of platform, to install a mesh and start to use it, start to onboard teams, start to provide some of the underlying visibility with it, start to provide some of the underlying security with it, maybe just do broad traffic-related things, right, that are kind of one size fits all. And then, as they gain confidence, start to do more with it with respect to things like traffic management, start to give their own developers more control as they get more confident. So I think it’s a spectrum. Right. And then the other side of that, too, is that how much your organization has a PaaS, or tries to hide underlying infrastructure in general, is going to have an influence on the amount that a developer needs to interact with a mesh, or has to interact with a mesh. In general, the instinct should be: developers should be able to control their own traffic, and probably the platform team should control the other things.
Bruce Cameron Gain: And we had kind of touched on this before we started our conversation, or the recording, excuse me. We were talking about whether maybe there are alternatives out there: platforms where there is indeed service mesh type functionality, but even for the operations team it’s transparent, and they don’t really have to worry about managing it. Is that a viable scenario, or is this maybe something that is being promised that might not really work?
Andrew Jenkins: There are definitely platforms that kind of include baked-in service meshes and kind of management around a service mesh. I would say that their goal is to make the downsides as transparent as possible, in managing and upgrading and things like that. But hopefully the upsides still surface: observability should still be driven by the mesh; the kinds of policies that you can enact, or the traffic management that you can do, are still driven by the mesh. And so in that sense, your developers or operations folks, the platform team, are still interacting with the mesh, even if they don’t have to interact with it as a completely separate component. So I think the fundamental principles of service mesh still apply. It’s just that there are some cases where all the choices that the platform has made around how it’s going to use a service mesh may match up one to one with your organization, and so therefore there’s no benefit for you in swapping that out and doing it all yourself. That happens sometimes. But then I’ll say that we’re also, I guess by nature, seeing a lot of cases where we’re talking to users who want a little bit of a deeper level of control. They want to be able to do some special things in the service mesh, you know, even as simple as adopting their own upgrade path for the service mesh component, or having it be consistent across different platforms, where they may want to make some choices differently than what the platform has already made. But in all cases, hopefully your developers are getting to utilize the benefits of the service mesh, whether it’s baked into the platform or whether it’s something that a platform team is operating at a more custom level.
Bruce Cameron Gain: And as far as observability goes, with Istio it seems as if observability of microservices is the key capability of Istio. Would you agree or not?
Andrew Jenkins: I’d agree that it’s the first thing out of the box that makes a positive impact in your life as a developer. I’ll say that.
Zack Butcher: Yeah, for sure. I can totally agree with that. Right. As far as the day one experience goes, or day zero, observability is the key. I would argue that identity is the single most important feature of a service mesh, and in fact, identity is kind of the key thing that it does, and everything else stems from identity. But that’s partly philosophical, and we can go into the weeds on that one. But yeah, in terms of user facing features, observability definitely wows from the start.
Bruce Cameron Gain: Is it possible, in maybe just a few sentences, to dig down a little deeper into why identity is such a key feature?
Zack Butcher: Communication doesn’t matter unless you know who you’re communicating to and with. Right. What metrics are you producing? What are the metrics about, unless you know the client and the server that are communicating? Everything in the system really stems from knowing who you’re communicating with, from that sense of identity. Right. From that we can have policy. From that we can talk about how traffic flows and where traffic flows. Right. What is a destination in your mesh to send traffic to? A thing with an identity; we need a name for it, a handle for it, first. What do you report metrics on? A service. That’s a thing with an identity. It really all stems from having services as a reified concept, assigning identities to them at runtime, and being able to use that at runtime to know who you’re actually talking with. Everything else kind of follows from that. So that’s why I say it’s a little philosophical, in that, can we communicate without having an identity? Yes, we can. But who are you really talking to, and how can you trust those metrics? How can you trust that communication, and what is actually happening there, unless you know? And so that’s why I say it.
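[Editor’s note: in Istio, that runtime identity is a SPIFFE ID carried in the workload’s mTLS certificate, and policy can be written directly against it. As a sketch, with hypothetical namespaces, service accounts and app labels, an AuthorizationPolicy can say “only the frontend’s identity may call the payments service”:]

```yaml
# Allow only requests whose peer certificate carries the frontend's
# SPIFFE identity; other callers are rejected by this ALLOW policy.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-frontend
  namespace: payments          # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: payments            # hypothetical workload label
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/web/sa/frontend"]
    to:
    - operation:
        methods: ["GET", "POST"]
```

[The `principals` field is exactly the “who are you really talking to” Zack describes: it is checked against the authenticated certificate, not against network location.]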
Bruce Cameron Gain: Andrew, is that a feature that was prevalent, or a very wow feature, for you at the beginning, that might have evolved and changed?
Andrew Jenkins: It’s a key part of scaling beyond just one cluster, right. This identity problem is something approaching tractable in a tiny, self-contained environment like one Kubernetes cluster or something like that. But as you start distributing at planet scale, or across data centers, or with organizations that are hybrid, or with a system that’s so large and changes so quickly that it’s really hard just to write down all of the identities of everything all at once in one place, then you need something smarter and more flexible. And this ability to handle identity at a large, very flexible and very rapidly iterating scale is already built into a service mesh. But that’s kind of the day zero kind of thing. I think it wasn’t first on a lot of users’ minds as a thing that they need. And unfortunately, because it’s already built in, it may actually be one of the things that is harder to notice was so key to helping you scale up. But it is absolutely crucial. It’s the part of security where it actually is somewhat of a solvable problem to be able to talk to some pod in some other Kubernetes cluster. That’s not it. It’s about, just like Zack said, knowing what it is, knowing who is on the other end of the thing that you’re talking to, and then being able to use that as a foundation for policy and all this other stuff.
Bruce Cameron Gain: A new version of Istio has just been released. What’s the key feature, or what do you love about it the most? And for the people migrating to Kubernetes today and looking at a service mesh – what are they going to like?
Andrew Jenkins: I have two answers here. One is really boring, and that’s good: support for elliptic curve crypto certs for TLS between pods – this is important for me – which is not all that mind-blowing of a feature, but it shows the state Istio is in, where it now has a lot of capacity to circle back and flesh out requirements, make sure organizational requirements, policies, things like that can be adopted. So that’s just a great example of the maturity side of Istio. The other thing that’s been developing over a couple of releases, is getting more and more mature, and is really big in 1.5, is WebAssembly support. That’s going to be a way to extend Istio, and especially the sidecar Envoy proxy, in a more portable and rapidly evolving way, rather than having to build very low-level components in the system. And I think that’s going to be great because it will allow developers to extend the capabilities of the service mesh without all of that having to happen in this crowded core, where stability is an extremely important concern and can be a natural drag on innovation. So opening up the WebAssembly front allows us to do both: stability and an open door for innovation.
Bruce Cameron Gain: And is that mainly relegated to the JavaScript side of things, or is it maybe a wider thing?
Andrew Jenkins: So WebAssembly is cool because it’s kind of like the concept of JavaScript: hey, it’s this language that can run anywhere, and it’s in everybody’s browser. It’s that conceptually, but without a lot of the technical reasons why JavaScript might not be a great fit for low-level applications. While WebAssembly is the output format, you can write the input to WebAssembly in many different programming languages – or JavaScript if you want. And that’s an important part of broadening that ecosystem.
Bruce Cameron Gain: That’s fascinating. So that’s kind of about giving programmers and engineers the ability to improve things for the application experience, along with the infrastructure changes and configurations, possibly. Is that correct?
Andrew Jenkins: Yeah, yeah.
Bruce Cameron Gain: And Zack, what are your thoughts about that?
Zack Butcher: Yeah, Andrew took all my answers. No, I think the single biggest thing for me about Istio 1.6 is that it’s kind of a boring release in a lot of respects, and I think that is the ultimate goal of any infrastructure project. Right. In many respects, I am very happy when there are no big, earth-shattering features. So I look at things like upgrading to 1.6 – for many people it will be the first time they use the operator to do an upgrade, because that was made default in 1.5, I believe, maybe 1.4. Things like making the lifecycle management easier going into this next release are some of the things that I think are really, really big and key for people. And WebAssembly, like Andrew said, I think is really going to be an awesome enabling technology in the future. Since you asked about JavaScript – today Envoy actually only supports C++ and Rust for WebAssembly, and Go is in very early stages as well. So there isn’t even a JavaScript SDK to use with Envoy today, because Envoy has to expose an API, and when we program against it, we need a handle to that API in our programming language. Right. So today only C++ and Rust have been implemented semi-officially, and then there’s a Go one as well. So those are the big things in my mind: keep it going, and keep making upgrades easier. I think you need even less configuration than ever before to do the installation and upgrade as well. And so those, to me, are the big and exciting things. The more boring an Istio release’s notes can be, the happier I am, because that shows how the project is maturing, how we’re able to spend time going back and addressing not the 80 percent use cases but the 20 percent use cases. Right. And that, to me, is the really interesting stuff.
Bruce Cameron Gain: What are you working on under the Tetrate umbrella? What are you working on now to solve those 20 percent use cases?
Zack Butcher: Yeah, so generally speaking, what we’ve been doing is working hand in hand with some companies at large scale, getting service mesh into production with them. Right. And what are all of the things that need to happen to make that happen? You talked a little bit about single-pane-of-glass stuff, so we’re building out that kind of thing. Right. As an organization, I need centralized controls and that kind of thing. And so that’s the general theme: build out the sets of tooling and infrastructure required to get a mesh actually adopted in a real, large enterprise.
Bruce Cameron Gain: And Andrew, as far as Aspen Mesh goes, what are some of the challenges that you’re working on at this time?
Andrew Jenkins: Yeah, we’re really talking in circles and building on each other here. So I think right now, Aspen Mesh is taking a turn around some of the release and integration stuff. That’s something that, if it’s done right, is really powerful and advances a project, but it’s not necessarily anybody’s absolute favorite thing to do, and so we’re stepping up to the plate around some of that at this point. There’s also been some security stuff, interestingly, going back to Zack’s discussion around identity. We have some users with existing, very large systems that had concepts of identity based around existing concepts like domain names and TLS infrastructure. And so we’re helping bridge the gap between what they’re doing now and what they want to do in the future. And in this migration, there’s no way we can just jump to the future; we’re going to have to evolve pointwise from where we are to where we’re going. So a lot of that is adding some foundational components to make sure those identities are flexible enough to address the use cases that are not quite as easy as “I’ve got a brand new, fresh container application that I’m just going to stand up in my Kubernetes cluster.” It’s this brownfield, hybrid environment.
Bruce Cameron Gain: Excellent. And you were one of the earlier developers of service mesh, as I understand. I was wondering if you could briefly describe how that’s evolved. You already touched a little bit on some of the wrong turns that have been made, especially in the open source projects. But where is this all going? What’s next for Istio and service meshes?
Andrew Jenkins: I worked on projects even before the term service mesh was coined, or Istio existed – projects around how to connect applications flexibly, especially as things moved to containers. Istio really changed the game in terms of broad open source adoption and a developer API – policy objects and things – that natively matched up to Kubernetes very well. And so that’s why Aspen Mesh is built around Istio as a foundational component, and why we do some of the things in the community that we do, to help keep the underlying project healthy. Going into the future of service meshes, I do think that now that people have it in their hands and are getting it into their clusters more and more, they’re starting to build applications that don’t necessarily bring along all of the components that a service mesh also provides. They’re starting to say, oh, we actually can delegate all of that stuff to the service mesh. I think there are going to be two big fronts that we’ll see. One is service meshes that span and interact across infrastructure components. It won’t just be a Kubernetes cluster: you will have service meshes that your organization manages that may include virtual machines, that may include many different Kubernetes clusters, and that stitch all these things together in a way that’s secure, that maintains identity, that’s still observable. So that’s one: adoption across a bunch of different clusters. And the second is novel ways of deploying and managing applications built on the capabilities of a service mesh. This is the progressive delivery and canary rollouts and things like that. I think that’s been a wishlist item for a lot of large organizations.
And I think that with Kubernetes, containerization, and things like a service mesh, it’s going to be a lot more practical for them to actually start building on that and getting value in their application lifecycle.
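The canary rollouts Andrew mentions come down to splitting traffic between service versions by weight, which a mesh does in the proxy layer (in Istio, via route weights in a VirtualService). As a rough sketch of the splitting idea only – the names here are ours, and a real mesh does this in the data plane, not in application code – a deterministic weighted splitter in Go might look like:

```go
package main

import "fmt"

// CanarySplitter deterministically routes a stream of requests between two
// versions of a service according to an integer percentage weight.
// Illustration of the traffic-splitting concept, not any mesh's real API.
type CanarySplitter struct {
	CanaryPercent int // 0..100: share of requests sent to the canary
	count         int // requests routed so far
	canarySent    int // requests routed to the canary so far
}

// Route returns "canary" or "stable" for the next request, keeping the
// canary's observed share as close as possible to CanaryPercent.
func (s *CanarySplitter) Route() string {
	s.count++
	// Route to the canary whenever doing so keeps it at or under its quota.
	if s.canarySent*100 < s.CanaryPercent*s.count {
		s.canarySent++
		return "canary"
	}
	return "stable"
}

func main() {
	s := &CanarySplitter{CanaryPercent: 10}
	counts := map[string]int{}
	for i := 0; i < 100; i++ {
		counts[s.Route()]++
	}
	fmt.Println(counts["canary"], counts["stable"]) // 10 90
}
```

Progressive delivery is then just nudging `CanaryPercent` upward over time while watching the mesh’s metrics for the canary version – the same loop a tool driving Istio route weights automates.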
Bruce Cameron Gain: And again, I think, based on what you both said, it might move in the direction of being more applicable to the data center and the on-premises model as you migrate to a cloud native environment. Maybe we’re going to move in a direction where the service mesh will be more applicable to on-premises deployments as well?
Zack Butcher: Yeah, yeah, for sure. That’s actually a primary thing that Tetrate works on. When I talked earlier in the podcast about the fact that data centers are not going away – we’re going to have them over the next 40, 50 years. Right. That’s exactly acknowledging that the mesh has to span this heterogeneous infrastructure. I won’t use the term legacy – it’s a dirty word, because that’s actually the stuff that’s making money in most organizations. Right. So you have to go back into the brownfield. I think that’s one of the big areas where it’s going, and I think Andrew is exactly right: we’ll see a better development experience, see it become more pervasive across more environments, that kind of thing. If I take an even longer view – maybe this is a little too far, but I think Andrew is exactly correct for the near term of the next five years or so. If we look a little bit further: I get to work with folks like the Open Networking Foundation a decent bit, and they do some really interesting things around software-defined networking and telco standards and stuff like that. Where we see it going in the really, really long term is that it just becomes part of the network. Right. If you look at the dream of what Istio wants to do, if you look at the capabilities Envoy has – SDN is kind of approaching this from the bottom up, and Envoy and Istio and these ecosystems are approaching it from the top down, from the application down. And I think the real beauty is that eventually we’re going to meet up, and these capabilities the mesh brings are going to be a transparent and ambient part of the network you’re in. And that’s the beauty in boring, right? That’s when we’ve made it: when your service mesh is, like, in the kernel, and it’s just boring and it just does it. That’s the goal.
Bruce Cameron Gain: Even for the operations folks, right? I mean, that’s already the case for the developers – they get to do their magic, they can do their fun work, they get to create their applications. And then the operations folks are struggling more with the security, and maybe they’re looking at ways to automate things. And maybe in five years they’re not going to have to worry about service mesh, as you said. At the same time, for the developers, it’s going to be business as usual, except maybe, as you brought up before, the menu of programming languages applicable to certain applications used with a service mesh will become much larger. So I guess you would have the best of both worlds.
Zack Butcher: Yeah, I think the real goal is that eventually, as an application developer, what I really want to be able to do is guarantee quality of service for my application. Right. I want to be able to say, hey, for these types of traffic, this is the quality of service my application needs to provide, and the network should go and do whatever is required to implement that quality of service – whether that’s pushing it into switch pipelines (we can do per-request HTTP handling in a switch if I really need to), or doing it in an NFV, or doing it in userspace. Right. And so we’re going to see trade-offs made along that spectrum, transparent to the user, based on things like quality of service. And that’s where I hope we can start to get away from this – I think of programming a service mesh today as almost like publishing individual routes before we had BGP. Right. It’s very manual, very finicky, very one-off. And we need the sets of technology that start to make it more automatic, more transparent, and just work completely.
Andrew Jenkins: That’s exactly the right analogy. When developers start today, they don’t really worry about how to retry packets over the network because the network might be unreliable and lose packets. That was a solved problem decades ago, and it’s built in. They don’t worry about parsing HTTP requests and responses; it’s built into some library they can use. But they have had to worry about some higher-level reliability concerns, or addressability concerns, things like that. And as Zack says, when we get to the end state and it’s just pervasive and built in, we’ll know success because there will be a whole new class of things they don’t have to worry about. We’re starting to see that in some environments – this already happened with containers in Kubernetes. You can already delegate “how do I find the best instance of this service?” down to a service mesh. “How do I make sure I’m talking to a secured version of this, whose identity I know?” A service mesh can do that. And so if this becomes universal for all programs everywhere, because of a combination of service mesh implementations like Istio and equivalent capabilities in NICs and switches and things like that, then that’s a massive success, because this is the whole developer thing, right? There’s a whole class of problems they don’t have to worry about, that don’t slow them down. They can focus on the next higher-level thing.
Bruce Cameron Gain: Well, I wanted to thank you both very much. Zack Butcher, founding engineer of Tetrate, and Andrew Jenkins, co-founder and CTO of Aspen Mesh.
Voiceover: Listen to more episodes of The New Stack Makers at thenewstack.io/podcasts, please rate and review us on iTunes, like us on YouTube and follow us on SoundCloud. Thanks for listening and see you next time.
Voiceover: Aspen Mesh provides a simpler and more powerful distribution of Istio through a service mesh policy framework, a simpler user experience delivered through the Aspen Mesh UI and a fully supported, tested and hardened distribution of Istio that makes it viable to operate service mesh in the enterprise.