Top 3 Service Mesh Developments in 2020

In 2019, service mesh moved beyond an experimental technology to become a solution that organizations are learning is an elemental building block for any successful Kubernetes deployment. Adoption of service mesh at scale, across companies large and small, began to gain steam. As the second wave of adopters watched the cutting-edge adopters trial and succeed with service mesh technology, they too began to evaluate service mesh to address the challenges Kubernetes leaves on the table.

In tandem with growing adoption, 2019 brought a burgeoning service mesh market. Istio and Linkerd kept chugging along, and the tooling and vendor ecosystem around Istio almost tripled throughout the year. But many new players also entered the market with alternative approaches to solving layer 7 networking challenges. Meshes such as Kuma and Maesh have emerged to provide different approaches to service mesh that address various edge use cases. We also saw tools like the SMI Spec and Meshery attempt to engage an early market that is flourishing on immense opportunity but has yet to consolidate, as key players wait for the market to choose winners first. Adjacent projects like Network Service Mesh bring service mesh principles to lower layers of the stack.

While there is still much to be settled in the service mesh space, the value of service mesh as a technology pattern is clear, as evidenced by the recently released “Voice of the Enterprise: DevOps,” 1H2019 survey conducted by 451 Research.

While the market is still nascent, interest in and plans to adopt service mesh as a critical piece of infrastructure are quickly catching up to those of Kubernetes and containers.

Service Mesh in 2020: The Top 3 Developments

1. A quickly growing need for service mesh

Kubernetes is exploding. It has become the preferred choice for container orchestration in the enterprise and in greenfield deployments. Real challenges are causing brownfield deployments to lag behind, but those are being explored and solved. Yes, Kubernetes is a nascent technology. And yes, much of the world is years away from adopting it. But it's clear that Kubernetes has become, and will continue to be, a dominant force in the world of software.

If Kubernetes has won, and the scale and complexity of Kubernetes-based applications continue to increase, then there is a tipping point at which service mesh becomes all but required to effectively manage those applications.

2. Istio Will Be Hard to Beat

There’s likely room for a few other contenders in the market, but we will see market consolidation begin in 2020. In the long term, it’s probable that we’ll see a Kubernetes-like situation where a winner emerges and companies begin to standardize around that winner. It’s conceivable that service mesh may not be the technology pattern that is picked to solve layer 7 networking issues. But if service mesh does win out, it seems likely that Istio becomes the de facto service mesh. There are many arguments for and against this, but the most telling factor is the ecosystem developing around Istio. Almost every major software vendor has an Istio solution or integration, and the Istio open source community far surpasses any other in terms of activity and contributions.

3. Use Cases, Use Cases, Use Cases

2019 was the year when problems apt for service mesh to solve were identified. Early adopters chose the top two or three capabilities they wanted from service mesh and dove in. In the past year, the three most commonly requested solutions have been:

  • mTLS
  • Observability 
  • Traffic management 

2020 will be the year that core service mesh use cases emerge and are used as models for the next wave of adopters to implement service mesh solutions. 

The top use cases that our customers ask for are:

  • Observability to better understand cluster status, debug more quickly and understand systems deeply enough to architect more resilient and stable systems going forward
  • Leveraging service mesh policy to drive intended application behaviors
  • Enforcing and proving a secure and compliant environment
  • Technologies like WebAssembly (WASM) that make it possible to distribute existing functionality to data plane sidecars, as well as build new intelligence and programmability
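As a sketch of the policy use case above, an Istio AuthorizationPolicy can express intended application behavior such as "only the frontend may call the reviews service." The namespace, workload and service account names here are hypothetical; this is an illustration, not a recommended configuration.

```yaml
# Hypothetical example: only workloads running as the "frontend" service
# account may call the reviews workload in the bookinfo namespace.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews-allow-frontend
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/bookinfo/sa/frontend"]
```

Because the policy is enforced by the sidecars rather than by application code, it can be audited and changed without redeploying the services themselves.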

If you are already using a service mesh, you understand the value it brings. If you’re considering a service mesh, pay close attention to this space; the growing number of use cases will make the real-world value proposition clearer in the year ahead. At Aspen Mesh, we’re always happy to talk about service mesh, the best path to implementation and how our customers are solving problems. Feel free to reach out!


Improving Microservices: Weighing Service Mesh Options and Benefits

Microservices are a hell of a drug. 

Rapid development. Easier testing and deployment. Applications that are simpler to change and maintain. 

It’s easy to understand why microservice-based applications are becoming more and more common. Through microservice architectures, enterprises are realizing: 

  • Improved scalability
  • Increased development velocity
  • Easier debugging
  • Better alignment between development and user requirements 

As companies build or convert to more modern applications, they are leveraging microservices to drive differentiation and market leadership. As a side effect, they realize that they are increasing complexity and decentralizing ownership and control. These new challenges require new solutions to effectively monitor, manage and control microservice-based applications at runtime.

Kubernetes has become the de facto method for enterprises to orchestrate containers. Kubernetes simplifies the work of technical teams by automating application processes and service deployments that were previously performed manually. It is a superb tool for managing containerized application deployment challenges, but it leaves some runtime challenges on the table.

That’s where service mesh comes in. A service mesh like Aspen Mesh adds observability, security and policy capabilities to Kubernetes. A service mesh helps to ensure resiliency and uptime – it provides solutions that enable engineering teams to more effectively monitor, control and secure the modern application at runtime. Companies are adopting service mesh as a way to enhance Kubernetes, as it provides a toolbox of features that address various microservices challenges that modern enterprises are facing.

Thoughts on the state of microservices from OSCON

Having attended OSCON last week, I found it interesting to talk with people spread across the microservices, Kubernetes and service mesh adoption curves. While it was clear that almost everyone is at least considering microservices, many are still waiting to see their peers implement before deciding on their own path forward. An interesting takeaway was that more and more organizations are looking to microservices for brownfield deployments, whereas even a couple of years ago almost everyone only considered building microservices architectures for greenfield. The conversations around brownfield signaled to me that as microservices technology and tooling continue to evolve, it is more feasible for non-unicorn companies to effectively and efficiently decompose the monolith into microservices.

Another observation is that Kubernetes use is starting to catch up to the hype. The decision to use Kubernetes for container orchestration was nearly unanimous among the OSCON attendees I spoke with. They were in various phases of implementation with some still just running POCs to evaluate the best use cases, but many conversations were centered on how companies are running Kubernetes in production for mission critical applications. 

Among those humming along with Kubernetes, many were interested in service mesh as a way to extend or enhance what they are getting from Kubernetes. The top three reasons people said they want to implement service mesh were: 

  • Observability - to better understand the behavior of Kubernetes clusters 
  • mTLS - to add cluster-wide service encryption
  • Distributed Tracing - to simplify debugging and speed up RCA

Gauging the cloud-native infrastructure space after OSCON, there is no doubt that there is still more exploration and evaluation of tools like Kubernetes and Istio ahead, but the gap is definitely closing. Companies are closely watching the leaders in the space to see how they are implementing and what benefits and challenges they are facing. As more organizations successfully adopt these new technologies, it's becoming obvious that while there is a skills gap and new complexity that must be accounted for, the outcomes around increased velocity, better resiliency and improved customer experience mandate that many organizations actively map their own path with microservices. This will help ensure that they are not left behind by the market leaders in their space.

Interested in reading more articles like this?

Subscribe to our blog here.


Top 3 Service Mesh Developments in 2019

Last year was about service mesh evaluation, trialing — and even hype.

While the interest in service mesh as a technology pattern was very high, it was mostly about evaluation and did not see widespread adoption. The capabilities service mesh can add to ease managing microservice-based applications at runtime are obvious, but the technology still needs to reach maturity before gaining widespread production adoption.

What we can say is service mesh adoption should evolve from the hype stage in a very real way this year.

What can we expect to see in 2019?

  1. The evolution and coalescing of service mesh as a technology pattern;
  2. The evolution of Istio as the way enterprises choose to implement service mesh;
  3. Clear use cases that lead to wider adoption.

The Evolution of Service Mesh

There are several architectural options when it comes to service mesh, but undoubtedly, the sidecar architecture will see the most widespread usage in 2019. The sidecar proxy as the architectural pattern, and more specifically Envoy as the technology, have emerged as clear winners for how the majority will implement service mesh.

Considering control plane service meshes, we have seen the space coalesce around leveraging sidecar proxies. Linkerd, with its merging of Conduit and release of Linkerd 2, got on the sidecar train. And the original sidecar control plane mesh, Istio, certainly has the most momentum in the cloud native space. A look at the Istio Github repo shows:

  • 14,500 stars;
  • 6,400 commits;
  • 300 contributors.

And if these numbers don’t clearly demonstrate the momentum of the project, just consider the number of companies building around Istio:

  • Aspen Mesh;
  • Avi Networks;
  • Cisco;
  • OpenShift;
  • NGINX;
  • Rancher;
  • Tufin Orca;
  • Tigera;
  • Twistlock;
  • VMware.

The Evolution of Istio

So the big question is: where is the Istio project headed in 2019? I should start with the disclaimer that the following are all guesses. They are well-informed guesses, but guesses nonetheless.

Community Growth

Now that Istio has hit 1.0, the number of contributors outside the core Google and IBM teams is starting to grow. I’d hazard the guess that Istio will be truly stable around 1.3, sometime in June or July. Once the project gets to the point that it is usable at scale in production, I think you’ll really see it take off.

Emerging Vendor Landscape

At Aspen Mesh, we placed our bet on Istio 18 months ago. It is becoming clear that Istio will win service mesh in much the same way Kubernetes has won container orchestration.

Istio is a powerful toolbox that directly addresses many microservices challenges that are being solved with multiple manual processes, or are not being solved at all. The power of the open source community surrounding it also seems to be a factor that will lead to widespread adoption. As this becomes clearer, the number of companies building on Istio and building Istio integrations will increase.

Istio Will Join the Cloud Native Computing Foundation

Total guess here, but I’d bet on this happening in 2019. CNCF has proven to be an effective steward of cloud-native open source projects. I think joining it will also be key to widespread adoption, which in turn will be key to the long-term success of Istio. We shall see what the project founders decide, but this move will benefit everyone once the Istio project is at the point where it makes sense for it to become a CNCF project.

Real-World Use Cases Are Key To Spreading Adoption

Service mesh is still a nascent market and in the next 12-24 months, we should see the market expand past just the early adopters. But for those who have been paying attention, the why of a service mesh has largely been answered. The why is also certain to evolve, but for now, the reasons to implement a service mesh are clear. I think that large parts of the how are falling into place, but more will emerge as service mesh encounters real-world use cases in 2019.

I think what remains unanswered is: “What are the real-world benefits I am going to see when I put this into practice?” This is not a new question for an emerging technology. Nor will the way this question gets answered be anything new: through use cases. I can’t emphasize enough how use cases based on actual users will be key.

Service mesh is a powerful toolbox, but only a small swath of users will care about how cool the tech is. The rest will want to know what problems it solves.

I predict 2019 will be the year of service mesh use cases, which will naturally emerge as the number of adopters increases and they begin to talk about the value they are getting from a service mesh.

Some Final Thoughts

If you are already using a service mesh, you understand the value it brings. If you’re considering a service mesh, pay close attention to this space and the growing number of use cases will make the real-world value proposition clearer. And if you’re not yet decided on whether or not you need a service mesh, check out the recent Gartner, 451 and IDC reports on microservices — all of which say a service mesh will be mandatory by 2020 for any organization running microservices in production.



Aspen Mesh Open Beta Makes Istio Enterprise-ready

As companies build modern applications, they are leveraging microservices to effectively build and manage them. As they do, they realize that they are increasing complexity and decentralizing ownership and control. These new challenges require a new way to monitor, manage and control microservice-based applications at runtime.

A service mesh is an emerging pattern that helps ensure resiliency and uptime - a way to more effectively monitor, control and secure the modern application at runtime. Companies are adopting Istio as their service mesh of choice, as it provides a toolbox of features that address various microservices challenges. Istio provides a solution to many of these challenges but leaves some critical enterprise challenges on the table. Enterprises require additional features that address observability, policy and security. With this in mind, we have built new enterprise features into a platform that runs on top of Istio, providing all the functionality and flexibility of open source, plus the features, support and guarantees needed to power enterprise applications.

At KubeCon North America 2018, Aspen Mesh announced open beta. With Aspen Mesh you get all the features of Istio, plus:

Advanced Policy Features
Aspen Mesh provides RBAC capabilities you don’t get with Istio.

Configuration Vets
Istio Vet (an Aspen Mesh open source contribution that helps ensure correct configuration of your mesh) is built into Aspen Mesh, along with additional features that don’t come with open source Istio Vet.

Analytics and Alerting
The Aspen Mesh platform provides insights into key metrics (latency, error rates, mTLS status) and immediate alerts so you can take action to minimize MTTD/MTTR.
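As an illustration of the kind of alerting these metrics enable (this is not Aspen Mesh's actual rule set), a Prometheus alerting rule built on Istio's standard `istio_requests_total` metric might look like the following; the 5% threshold and the rule names are purely illustrative.

```yaml
# Hypothetical Prometheus alerting rule using Istio's standard telemetry.
# Fires when a service's 5xx error rate stays above 5% for ten minutes.
groups:
- name: mesh-alerts
  rules:
  - alert: HighRequestErrorRate
    expr: |
      sum(rate(istio_requests_total{response_code=~"5.."}[5m])) by (destination_service)
        / sum(rate(istio_requests_total[5m])) by (destination_service) > 0.05
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "5xx error rate above 5% for {{ $labels.destination_service }}"
```

Because the sidecars emit these metrics uniformly for every service, one rule like this covers the whole mesh without per-service instrumentation.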

Multi-cluster/Multi-cloud
See multiple clusters that live in different clouds in a single pane of glass, so you can understand what’s going on across your microservice architecture at a glance.

Canary Deploys
Aspen Mesh Experiments lets you quickly test new versions of microservices, qualifying them in a production environment without disrupting users.

An Intuitive UI
Get at-a-glance views of performance and security posture as well as the ability to see service details.

Full Support
Our team of Istio experts makes it easy to get exactly what you need out of service mesh. 

You can take advantage of these features for free by signing up for Aspen Mesh Beta access.


The New Aspen Mesh UI Makes Service Mesh Easy


The service mesh space is quickly maturing. The service mesh toolbox provides a bevy of features that can address different microservices challenges, and service meshes are ready to be used in production deployments. But what about the enterprise? Enterprises need additional features that address policy, configuration and a uniform view across distributed teams. With this in mind, we built a new user interface that will make it easier for you to run a service mesh in your enterprise. The redesigned UI brings the most important information to the forefront so you can easily understand the real-time status of your mesh. Here's a quick look at some of the features to give you an idea of what to expect with the new Aspen Mesh UI.

  • See security and performance posture at a glance – the service graph surfaces real-time mTLS status, and health scores provide visibility into performance
  • Go from macro to micro view – zoom from service graph view into namespace and workload details
  • View service details to quickly identify failures or bottlenecks – view and sort services by latency, error rate and health scores
  • Understand the health of your mesh and be alerted to any mesh configuration errors by our Istio Vet tool
  • Use Aspen Mesh Experiments to securely test and qualify new versions of microservices in your production environment without affecting users


Get your free Aspen Mesh account to take advantage of the new UI and a host of other features. 



How The Service Mesh Space Is Like Preschool

I have a four year old son who recently started attending full day preschool. It has been fascinating to watch his interests shift from playing with stuffed animals and pushing a corn popper to playing with his science set (w00t for the STEM lab!) and riding his bike. The other kids in school are definitely informing his view of what cool new toys he needs. Undoubtedly, he could still make do with the popper and stuffed animals (he may sleep with Lambie until he's ten), but as he progresses, his desire to explore new things increases.

Watching the community around service mesh develop is similar to watching my son's experience in preschool (if you're willing to make the stretch with me). People have come together in a new space to learn about cool new things, and as excited as they are, they don't completely understand the cool new things. Just as in preschool, there are a ton of bright minds that are eager to soak up new knowledge and figure out how to put it to good use.

Another parallel between my son and many of the people we talk to in the service mesh space is that they both have a long and broad list of questions. In the case of my son, it's awesome because they're questions like: "Is there a G in my name?" "What comes after Sunday?" "Does God live in the sky with the unicorns?" The questions we get from prospects and clients on service mesh are a bit different but equally interesting. It would take more time than anybody wants to spend to cover all these questions, but I thought it might be interesting to cover the top 3 questions we get from users evaluating service mesh.

What do I get with a service mesh?

We like getting this question because the answer to it is a good one. You get a toolbox that gives you a myriad of capabilities. At a high level, what you get is observability, control and security of your microservice architecture. The features that a service mesh provides include:

  • Load balancing
  • Service discovery
  • Ingress and egress control
  • Distributed tracing
  • Metrics collection and visualization
  • Policy and configuration enforcement
  • Traffic routing
  • Security through mTLS

When do I need a service mesh?

You don't need 1,000 microservices for a service mesh to make sense. If you have nicknames for your monoliths, you're probably a ways away from needing a service mesh. And you probably don't need one if you only have two services, but if you have a few services and plan to continue down the microservices path, it is easier to get started sooner. We are believers that containers and Kubernetes will be the way companies build infrastructure in the future, and waiting to hop on that train will only be a competitive disadvantage. Generally, we find that the answer to this question hinges on whether or not you are committed to cloud native. Service meshes like Aspen Mesh work seamlessly with cloud-native tools, so the barrier to entry is low, and running cloud-native applications will be much easier with the help of a service mesh.

What existing tools does service mesh allow me to replace?

This answer all depends on what functionality you want. Here's a look at the tools that service mesh overlaps with, what it provides and where you'll need to keep your old tools.

API gateway
Not yet. A service mesh replaces some of the functionality of an API gateway but does not yet cover all of the ingress and payment features an API gateway provides. Chances are that API gateways and service meshes will converge in the future.

Tracing Tools
You get tracing capabilities as part of Istio. If you are using distributed tracing tools such as Jaeger or Zipkin, you no longer need to manage them separately, as they are part of the Istio toolbox. With Aspen Mesh's hosted SaaS platform, we offer managed Jaeger, so you don't even need to deploy or manage it.

Metrics Tools
Just like tracing, a metrics monitoring tool is included as part of Istio. Istio leverages Prometheus to query metrics, and you have the option of visualizing them through the Prometheus UI or using Grafana dashboards. With Aspen Mesh's hosted SaaS platform, we offer managed Prometheus and Grafana, so you don't even need to deploy or manage them.

Load Balancing
Yep. Envoy is the sidecar proxy used by Istio and provides load balancing functionality such as automatic retries, circuit breaking, global rate limiting, request shadowing and zone-local load balancing. You can use a service mesh in place of tools like HAProxy or NGINX for ingress load balancing.
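As a sketch of what this looks like in Istio's v1alpha3 API (the `payments` service name and the specific numbers are hypothetical), retries are declared on a VirtualService, while circuit breaking is expressed as outlier detection on a DestinationRule:

```yaml
# Hypothetical example: retry transient failures up to three times and
# eject endpoints that return five consecutive errors. Illustrative only.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: payments
spec:
  hosts:
  - payments
  http:
  - route:
    - destination:
        host: payments
    retries:
      attempts: 3
      perTryTimeout: 2s
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: payments
spec:
  host: payments
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

The point of the split is that routing behavior (retries) and client-side load balancing policy (ejection) can evolve independently, with no change to application code.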

Security tools
Istio provides mTLS capabilities that address some important microservices security concerns. If you’re using SPIRE, you can replace it with Istio, which provides a more comprehensive utilization of the SPIFFE framework. An important thing to note is that while a service mesh adds several important security features, it is not the end-all-be-all for microservices security. It’s important to also consider a strategy around network security.
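To give a sense of how little configuration the mTLS capability requires: in current Istio releases, strict mutual TLS can be enforced mesh-wide with a single resource (earlier Istio versions used a MeshPolicy for the same purpose). A minimal sketch:

```yaml
# Hypothetical sketch: require mTLS for all workload-to-workload traffic
# in the mesh. Applies to current Istio; older releases used MeshPolicy.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

The sidecars handle certificate issuance and rotation automatically, which is what makes cluster-wide encryption feasible without touching the services themselves.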

If you have little ones and would be interested in comparing notes on the fantastic questions they ask, let’s chat. I'd also love to talk all things service mesh. We have been helping a broad range of customers get started with Aspen Mesh and make the most of it for their use cases. We’d be happy to talk about any of those experiences and best practices to help you get started on your service mesh journey. Leave a comment here or hit me up @zjory.



Going Beyond Container Orchestration

Every survey of late tells the same story about containers: organizations are not only adopting but embracing the technology. Most aren't yet relying on containers with the same degree of criticality as hyperscale organizations, but they are among the 85% of organizations that IDC, in a Cisco-sponsored survey of over 8,000 enterprises, found to be using containers in production. That sounds impressive, but the scale at which they use them is limited. In a Forrester report commissioned by Dell EMC, Intel and Red Hat, 63% of enterprises using containers have more than 100 instances running, and 82% expect to be doing the same by 2019. That's a far cry from the hundreds of thousands in use at hyperscale technology companies.

And though the adoption rate is high, that's not to say that organizations haven't dabbled with containers only to abandon the effort. As with any (newish) technology, challenges exist. At the top of the list for containers are suspects you know and love: networking and management.

Some of the networking challenges are due to the functionality available in popular container orchestration environments like Kubernetes. Kubernetes supports microservices architectures through its service construct. This allows developers and operators to abstract the functionality of a set of pods and expose it as "a service" with access via a well-defined API. Kubernetes supports naming services as well as performing rudimentary layer 4 (TCP-based) load balancing.

The problem with layer 4 (TCP-based) load balancing is its inability to interact with layer 7 (application and API layers). This is true for any layer 4 load balancing; it's not something unique to containers and Kubernetes. Layer 4 offers visibility into connection level (TCP) protocols and metrics, but nothing more. That makes it difficult (impossible, really) to address higher-order problems such as layer 7 metrics like requests or transactions per second and the ability to split traffic (route requests) based on path. It also means you can't really do rate limiting at the API layer or support key capabilities like retries and circuit breaking.
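To make the contrast concrete, here is a hedged sketch, using Istio's routing API as one concrete service mesh implementation, of the layer 7 behavior described above: path-based routing plus a percentage-based traffic split, neither of which a purely layer 4 load balancer can do. The `catalog` service and its `v1`/`v2` subsets are hypothetical.

```yaml
# Hypothetical sketch: route /api/v2 requests to the v2 subset, and split
# the remaining traffic 90/10 between v1 and v2. The subsets would be
# defined in a separate DestinationRule (not shown).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - match:
    - uri:
        prefix: /api/v2
    route:
    - destination:
        host: catalog
        subset: v2
  - route:
    - destination:
        host: catalog
        subset: v1
      weight: 90
    - destination:
        host: catalog
        subset: v2
      weight: 10
```

Every decision in this sketch depends on inspecting the HTTP request, which is exactly the visibility a layer 4 (TCP) load balancer lacks.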

The lack of these capabilities drives developers to encode them into each microservice instead. That results in operational code being included with business logic. This should cause some amount of discomfort, as it clearly violates the principles of microservice design. It's also expensive as it adds both architectural and technical debt to microservices.

Then there's management. While Kubernetes is especially adept at handling build and deploy challenges for containerized applications, it lacks key functionality needed to monitor and control microservice-based apps at runtime. Basic liveness and health probes don't provide the granularity of metrics or the traceability needed for developers and operators to quickly and efficiently diagnose issues during execution. And getting developers to instrument microservices to generate consistent metrics can be a significant challenge, especially when time constraints are putting pressure on them to deliver customer-driven features.

These are two of the challenges a service mesh directly addresses: management and networking.

How Service Mesh Answers the Challenge

Both are more easily addressed by the implementation of a service mesh as a set of sidecar proxies. By plugging directly into the container environment, sidecar proxies enable transparent networking capabilities and consistent instrumentation. Because all traffic is effectively routed through the sidecar proxy, it can automatically generate and feed the metrics you need to the rest of the mesh. This is incredibly valuable for those organizations that are deploying traditional applications in a container environment. Legacy applications are unlikely to be instrumented for a modern environment. The use of a service mesh, built on sidecar proxies, enables those applications to emit the right metrics without requiring code to be added or modified.

It also means that you don't have to spend your time reconciling different metrics being generated by a variety of runtime agents. You can rely on one source of truth - the service mesh - to generate a consistent set of metrics across all applications and microservices.

Those metrics can include higher-order data points that are fed into the mesh and enable more advanced networking to ensure the fastest available responses to requests. Retries and circuit breaking are handled by the sidecar proxy in a service mesh, relieving the developer of the burden of introducing operational code into their microservices. Because the sidecar proxy is not constrained to layer 4 (TCP), it can support advanced message routing techniques that rely on access to layer 7 (application and API).

Container orchestration is a good foundation, but enterprise organizations need more than just a good foundation. They need the ability to interact with services at the upper layers of the stack, where metrics and modern architectural patterns are implemented today.

Both are best served by a service mesh. When you need to go beyond container orchestration, go service mesh.


API Gateway vs Service Mesh


One of the recurring questions we get when talking to people about a service mesh is, "How is it different from an API gateway?" It's a good question. The overlap between the API gateway and service mesh patterns is significant. They can both handle service discovery, request routing, authentication, rate limiting and monitoring. But there are differences in architectures and intentions. A service mesh's primary purpose is to manage internal service-to-service communication, while an API gateway is primarily meant for external client-to-service communication.


API Gateway and Service Mesh: Do You Need Both?

You may be wondering if you need both an API gateway and a service mesh. Today you probably do, but as service mesh evolves, we believe it will incorporate much of what you get from an API gateway today.

The main purpose of an API gateway is to accept traffic from outside your network and distribute it internally. The main purpose of a service mesh is to route and manage traffic within your network. A service mesh can work with an API gateway to efficiently accept external traffic then effectively route that traffic once it's in your network. The combination of these technologies can be a powerful way to ensure application uptime and resiliency, while ensuring your applications are easily consumable.

In a deployment with an API gateway and a service mesh, incoming traffic from outside the cluster would first be routed through the API gateway, then into the mesh. The API gateway could handle authentication, edge routing and other edge functions, while the service mesh provides fine-grained observability and control of your architecture.

The interesting thing to note is that service mesh technologies are quickly evolving and are starting to take on some of the functions of an API gateway. A great example is the introduction of the Istio v1alpha3 routing API, which is available in Aspen Mesh 1.0. Prior to this, Istio had used the Kubernetes Ingress resource, which is pretty basic, so it made sense to use an API gateway for better functionality. But the increased functionality introduced by the v1alpha3 API has made it easier to manage large applications and to work with protocols other than HTTP, which previously required an API gateway to do effectively.
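For illustration, here is a hedged sketch of that edge role expressed in the v1alpha3 API: an Istio Gateway terminating TLS at the edge and a VirtualService routing the traffic inward. The host name, certificate reference and backend service are all hypothetical.

```yaml
# Hypothetical sketch: the mesh's ingress gateway takes on edge duties
# that previously required a standalone API gateway. Names illustrative.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: api-cert
    hosts:
    - "api.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api-routes
spec:
  hosts:
  - "api.example.com"
  gateways:
  - public-gateway
  http:
  - route:
    - destination:
        host: api-backend
```

Once traffic enters this way, the same VirtualService machinery used at the edge also governs service-to-service routing inside the mesh, which is exactly the convergence described above.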

What The Future Holds

The v1alpha3 API provides a good example of how a service mesh is reducing the need for API gateway capabilities. As the cloud native space evolves and more organizations move to using Docker and Kubernetes to manage their microservice architectures, it seems highly likely that service mesh and API gateway functionality will merge. In the next few years, we believe that standalone API gateways will be used less and less as much of their functionality will be absorbed by service mesh.

If you have any questions about service mesh along the way, feel free to reach out.


Aspen Mesh Enterprise Service Mesh

Enabling the Financial Services Shift to Microservices

Financial services has historically been an industry riddled with barriers to entry. Challengers found it difficult to break through low margins and tightening regulations. However, large enterprises that once dominated the market are now facing disruption from smaller, leaner fintech companies that are eating away at the value chain. These disruptors are marked by technological agility, specialization and customer-centric UX. To remain competitive, financial services firms are reconsidering their cumbersome technical architectures and transforming them into something more adaptable. A recent survey of financial institutions found that ~85% consider their core technology to be too rigid and slow. Consequently, ~80% are expected to replace their core banking systems within the next five years.

Emerging regulations meant to address the new digital payment economy, such as PSD2 regulations in Europe, will require banks to adopt a new way to operate and deliver. Changes like PSD2 are aimed at bringing banking into the open API economy, driving interoperability and integration through open standards. To become a first class player in this new world of APIs, integration, and open data, financial services firms will need the advantages provided by microservices.

Microservices provide 3 key advantages for financial services

Enhanced Security

Modern fintech requirements create challenges for established security infrastructure. Features like digital wallets, robo-advisory and blockchain demand new security mechanisms. Microservices follow the best practice of creating a separate identity service, which addresses these new requirements.
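As one hedged example of what that dedicated identity layer can look like in practice: a service mesh such as Istio can enforce mutual TLS between workloads declaratively, so every service gets a cryptographically verified identity without application code changes (the `payments` namespace here is a placeholder):

```yaml
# Require mutual TLS for all workloads in a hypothetical payments namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments
spec:
  mtls:
    mode: STRICT
```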

Faster Delivery

Rapidly bringing new features to market is a cornerstone of successful fintech companies. Microservices make it easier for different application teams to independently deliver new functionality to meet emerging customer demands. Microservices also scale well to accommodate greater numbers of users and transactions.

Seamless Integration

The integration layer in a modern fintech solution needs a powerful set of APIs to communicate with other services, both internally and externally. This API layer is notoriously challenging to manage in a large monolithic application. Microservices make the API layer much easier to manage and secure through isolation, scalability and resilience.

Service mesh makes it easier to manage a complex microservice architecture

In the face of rapidly changing customer, business and regulatory requirements, microservices help financial services companies respond quickly. But this doesn't come for free: companies take on increased operational overhead during the shift to microservices, and technologies such as a service mesh can help manage that.

Service mesh provides a bundle of features around observability, security and control that are crucial to managing microservices at scale. Pre-existing solutions like DNS and configuration management provide some capabilities, such as service discovery, but not fast retries, load balancing, tracing or health monitoring. The old approach to managing microservices requires you to cobble together several different solutions each time a problem arises, while a service mesh bundles it all together in a reusable package. It's possible to accomplish some of what a service mesh manages with individual tools and processes, but it's manual and time consuming.
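For instance, the fast retries mentioned above become a few lines of declarative, reusable configuration in a mesh like Istio rather than per-service client code. A minimal sketch, assuming a hypothetical `accounts` service (the timeout and retry values are illustrative):

```yaml
# Retry failed calls to a hypothetical accounts service up to 3 times,
# with a per-try timeout and an overall request deadline.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: accounts
spec:
  hosts:
  - accounts
  http:
  - route:
    - destination:
        host: accounts
    timeout: 2s
    retries:
      attempts: 3
      perTryTimeout: 500ms
      retryOn: 5xx,connect-failure
```

Every consumer of the service benefits from this policy without changing a line of application code.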

Competition from innovative fintech startups, along with ever-increasing customer expectations, means established financial services players must change the way they deliver offerings and do business with their customers. Delivering on these new requirements is difficult with legacy systems. Financial services firms need a software architecture that's fit for purpose: agile, adaptable, highly scalable, reliable and robust. Microservices make this possible, and a service mesh makes microservices manageable at scale.


Microservices challenges

How Service Mesh Addresses 3 Major Microservices Challenges

I was recently reading the Global Microservices Trends report by Dimensional Research and found myself thinking "a service mesh could help with that." So I thought I would cover three of those challenges and how a service mesh addresses them. Respondents cited in the report make it clear that microservices are gaining widespread adoption. It's also clear that along with the myriad benefits they bring, there are tough challenges that come as part of the package. The report shows:

91% of enterprises are using microservices or have plans to
99% of users report challenges with using microservices

Major Microservices Challenges

The report identifies a range of challenges companies are facing.

Companies are seeing a mix of technology and organizational challenges. I'll focus on the technological challenges a service mesh solves, but it's worth noting that a service mesh also brings uniformity, making it possible to achieve the same view across teams, which can reduce the need for specialized skills.

Each additional microservice increases the operational challenges

Not with a service mesh! Infrastructure services were traditionally implemented as discrete appliances, which meant going to the actual appliance to get the service. Each appliance is unique, which makes monitoring, scaling and providing high availability for each one hard. A service mesh instead delivers monitoring, scalability and high availability inside the compute cluster itself, through APIs rather than additional appliances. This flexible framework removes much of the operational complexity associated with modern applications: with a service mesh in place, adding new microservices doesn't have to add complexity.
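To make the "through APIs instead of appliances" point concrete, here is a hedged sketch of how load balancing and passive health checking — jobs a hardware load balancer once handled — can be expressed as Istio configuration (the `inventory` service and the threshold values are illustrative):

```yaml
# Least-request load balancing plus passive health checks for a
# hypothetical inventory service: hosts returning repeated 5xx errors
# are temporarily ejected from the load balancing pool.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: inventory
spec:
  host: inventory
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```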

It is harder to identify the root cause of performance issues

The service mesh toolbox gives you a couple of things that help solve this problem:

Distributed Tracing
Tracing provides service dependency analysis across your microservices and tracks requests as they travel through them. It's also a great way to identify performance bottlenecks and zoom into a particular request to determine, for example, which microservice contributed to the latency of a request or which service created an error.

Metrics Collection
Another powerful thing you gain with service mesh is the ability to collect metrics. Metrics are key to understanding historically what has happened in your applications, and when they were healthy compared to when they were not. A service mesh can gather telemetry data from across the mesh and produce consistent metrics for every hop. This makes it easier to quickly solve problems and build more resilient applications in the future.

Differing development languages and frameworks

Another major challenge report respondents noted was maintaining a distributed architecture in a polyglot world. When moving from monolith to microservices, many companies struggle with the reality that, to make things work, they have to use different languages and tools. Large enterprises, with their many large, distributed teams, can be especially affected by this. A service mesh is programming-language agnostic, which brings uniformity to a polyglot world where different teams, each with its own microservice, are likely to be using different languages and frameworks. A mesh also provides a uniform, application-wide point for introducing visibility and control into the application runtime, moving service communication out of the realm of implied infrastructure to where it can easily be seen, monitored, managed and controlled.

Microservices are cool, but a service mesh makes them ice cold. If you're on the microservices journey and finding it difficult to manage the infrastructure challenges, a service mesh may be the right answer. Let us know if you have any questions about how to get the most out of service mesh; our engineering team is always available to talk.