
Digital Transformation: How Service Mesh Can Help

Your Company’s Digital Transformation

It’s happening everywhere, and it’s happening fast. In order to meet consumers head on in the best, most secure ways, enterprises are jumping on the digital transformation train (check out this Forrester report). 

Several years ago, digital transformations saw companies moving from monolithic architectures toward microservices and Kubernetes, but service mesh was in its infancy. No one yet knew they'd need something to help manage service-to-service communication. Now, with increasing complexity and demands coupled with thinly stretched resources or teams without service mesh expertise, a supported service mesh is becoming a good solution for many, especially DevOps teams.

Service Mesh for DevOps

"DevOps" is a term used to describe the business relationship between development and IT operations. Mostly, the term refers to improving communication and collaboration between the two teams. But while Dev is responsible for creating new functionality to drive business, Ops is often the unsung, but extremely important, hero behind the scenes. In IT Ops, you’re on the hook for strategy development, system design and performance, quality control, and direction and coordination of your team, all while collaborating with the Dev team and other internal stakeholders to achieve your business’s goals and drive profitability. Ultimately, it’s the Dev and Ops teams who are responsible for ensuring that teams communicate effectively, systems are monitored correctly, customer satisfaction stays high, and projects and issue resolution are completed on time. A service mesh can help with this by enabling DevOps.

Integrating a Service Mesh: Align with Business Objectives

As you think about adopting a service mesh, keep in mind that your success over time is largely dependent on aligning with your company’s business objectives. Sharing business objectives like these with your service mesh team will help to ensure you get--and keep--the features and capabilities that you really need, when you need them, and that they stay relevant.

What are some of your company’s business objectives? Here are three we’ve identified that a service mesh can help to streamline:

1. Automating More Processes (i.e. Removing Toil)
Automating processes frees up your team from mundane tasks so they can focus on more important projects. Automation can save you time and money.

2. Increasing Infrastructure Performance
Building and maintaining a battle-tested environment is key to your end users’ experience, and therefore to customer retention and your company’s bottom line.

In addition, much of your time is spent developing strategies to monitor your systems and working through issue resolution as quickly as possible, whether issues pop up during the workday or in the middle of the night. Fortunately, because a service mesh comes with observability, security and resilience features, it can help alleviate these responsibilities, decreasing mean time to detection (MTTD) and mean time to resolution (MTTR).

3. Maintaining Delivery to Customers
Reducing friction in the user experience is the name of the game these days, so UX and reliability are key to keeping your end users happy. If you’re looking at a service mesh, you’re already using a microservices architecture, and you’re likely using Kubernetes clusters. But once those become too complex in production, or don’t have all the features you need, it’s time to add a service mesh into the mix. A service mesh’s observability features, like cluster health monitoring, service traffic monitoring, easy debugging and root cause identification with distributed tracing, help with this. In addition, an intuitive UI is key to surfacing these features in a way that is easy to understand and act on, so make sure you’re looking at a service mesh that’s easy for your Dev team to use. This will help provide a more seamless (and secure) experience for your end users.

Evolution; Not Revolution

How do you actually go about integrating a service mesh? What drives success is having both agility and stability. That can be a tall order, so it helps to approach integrating a service mesh as evolution rather than revolution. Three key areas to consider while you’re evaluating a service mesh include:

  1. Mitigating risk
  2. Production readiness
  3. Policy frameworks

Mitigating Risk
Risk can be terrifying, so it’s imperative to take steps to mitigate it as much as possible; the only time your company should be making headlines is because of good news. Ensuring security, compliance and data integrity is the way to get there, and with security and compliance top of mind for many, it’s important to address security head on.

With a well-designed enterprise service mesh, you can expect plenty of security, compliance and policy features, making it easier for your company to achieve a zero-trust network. Features can range from ensuring the principle of least privilege and secure default settings to technical capabilities such as fine-grained RBAC and incremental mTLS.
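As an illustration of incremental mTLS, here is a minimal sketch of how Istio (the open source project Aspen Mesh builds on) can enforce mutual TLS for one namespace while the rest of the mesh migrates at its own pace; the `payments` namespace name is a hypothetical example:

```yaml
# Require mutual TLS for every workload in the "payments" namespace,
# while workloads in other namespaces can continue accepting
# plain-text traffic until they are ready to migrate.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments
spec:
  mtls:
    mode: STRICT
```

Applying the same policy namespace by namespace is what makes the rollout incremental rather than a risky mesh-wide switch.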

Production Readiness
Your applications are ready to be used by your end users, and your technology stack needs to be ready too. What makes a real impact here is reliability. Service mesh features like dynamic request routing, fast retries, configuration vetters, circuit breaking and load balancing greatly increase the resiliency of microservice architectures. Support is also a feature that some enterprises will want to consider in light of whether service mesh expertise is a core in-house skill for the business. Having access to an expert support team can make a tremendous difference in your production readiness and your end users’ experiences.

Policy Frameworks
While configuration is useful for setting up how a system operates, policy is useful for dictating how a system responds when something happens. With a service mesh, the combined power of policy and configuration provides capabilities that can drive outcome-based behavior from your applications. A policy catalog can accelerate this, while analytics examine policy violations and surface the best actions to take. This improves developer productivity with canary, authorization and service availability policies.

How to Measure Service Mesh Success

No plan is complete without a way to measure, iterate and improve your success over time. So how do you go about measuring the success of your service mesh? There are a lot of factors to take into consideration, so it’s a good idea to talk to your service mesh provider in order to leverage their expertise. But in the meantime, there are a few things you can consider to get an idea of how well your service mesh is working for you. Start by finding a good way to measure 1) how your security and compliance are impacted, 2) how much you’re able to reduce downtime and 3) differences you see in your efficiency.

Looking for more specific questions to ask? Check out the eBook, Getting the Most Out of Your Service Mesh for ideas on the right questions to ask and what to measure for success.


Service Mesh Landscape - Aspen Mesh

The Service Mesh Landscape

Where A Service Mesh Fits in the Landscape

Service mesh is helping to take the cloud native and open source communities to the next level, and we’re starting to see increased adoption across many types of companies -- from start-ups to the enterprise. 

A service mesh overlaps with, complements, and in some cases replaces many of the tools commonly used to manage microservices, so it helps to understand how it relates to the other technologies in the landscape. In the following, we've explained some ways that a service mesh fits with other commonly used container tools.


Container Orchestration

Kubernetes provides scheduling, auto-scaling and automation functionality that solves most of the build and deploy challenges that come with containers. Where it leaves off, and where service mesh steps in, is solving critical runtime challenges with containerized applications. A service mesh adds uniform metrics, distributed tracing, encryption between services and fine-grained observability of how your cluster is behaving at runtime. Read more about why container orchestration and service mesh are critical for cloud native deployments.

API Gateway

The main purpose of an API gateway is to accept traffic from outside your network and distribute it internally. The main purpose of a service mesh is to route and manage traffic within your network. A service mesh can work with an API gateway to efficiently accept external traffic then effectively route that traffic once it’s in your network. There is some nuance in the problems solved at the edge with an API Gateway compared to service-to-service communication problems a service mesh solves within a cluster. But with the evolution of cluster-deployment patterns, these nuances are becoming less important. If you want to do billing, you’ll want to keep your API Gateway. But if you’re focused on routing and authentication, you can likely replace an API gateway with service mesh. Read more on how API gateways and service meshes overlap.
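To make the edge-versus-mesh split concrete, here is a sketch of how this looks in Istio, where a Gateway resource accepts external traffic and a VirtualService routes it to services inside the mesh; the hostname, secret name and service name are hypothetical:

```yaml
# Accept external HTTPS traffic at the edge of the mesh...
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway   # Istio's default ingress gateway deployment
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: public-cert   # TLS cert stored as a Kubernetes secret
    hosts:
    - "api.example.com"
---
# ...then route it to a service running inside the mesh.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api-routes
spec:
  hosts:
  - "api.example.com"
  gateways:
  - public-gateway
  http:
  - route:
    - destination:
        host: orders.default.svc.cluster.local
```

Once traffic crosses the gateway, the same mesh routing rules apply to it as to any internal service-to-service request.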

Global ADC

Load balancers focus on distributing workloads throughout the network and ensuring the availability of applications and services. Load balancers have evolved into Application Delivery Controllers (ADCs), platforms for application delivery that ensure an organization’s critical applications are highly available and secure. While basic load balancing remains the foundation of application delivery, modern ADCs offer enhanced functionality such as SSL/TLS offload, caching, compression, rate shaping, intrusion detection, application firewalls and remote access, consolidated into a single strategic point of control. A service mesh provides basic load balancing, but if you need advanced capabilities such as SSL/TLS offload and rate shaping, you should consider pairing an ADC with a service mesh.

mTLS

Service mesh provides defense with mutual TLS encryption of the traffic between your services. The mesh can automatically encrypt and decrypt requests and responses, removing that burden from the application developer. It can also improve performance by prioritizing the reuse of existing persistent connections, reducing the need for the computationally expensive creation of new ones. Aspen Mesh provides more than just client-server authentication and authorization; it allows you to understand and enforce how your services are communicating and prove it cryptographically. It automates the delivery of certificates and keys to the services, the proxies use them to encrypt the traffic (providing mutual TLS), and it periodically rotates certificates to reduce exposure to compromise. You can use TLS to verify that Aspen Mesh instances are talking to other Aspen Mesh instances, preventing man-in-the-middle attacks.
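In Istio terms, turning on this behavior mesh-wide can be sketched with a single policy applied in the mesh's root namespace; the sidecar proxies then handle certificates and encryption automatically, with no application code changes:

```yaml
# Mesh-wide policy, applied in Istio's root namespace (istio-system):
# every workload in the mesh must send and accept only mutual-TLS
# traffic. The mesh issues, distributes and rotates the certificates.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```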

CI/CD

Modern Enterprises manage their applications via an agile, iterative lifecycle model.  Continuous Integration and Continuous Deployment systems automate the build, test, deploy and upgrade stages.  Service Mesh adds power to your CI/CD systems, allowing operators to build fine-grained deployment models like canary, A/B, automated dev/stage/prod promotion, and rollback.  Doing this in the service mesh layer means the same models are available to every app in the enterprise without app modification. You can also up-level your CI testing using techniques like traffic mirroring and fault injection to expose every app to complicated, hard-to-simulate fault patterns before you encounter them with real users.
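A canary rollout of the kind described above can be sketched in Istio as a weighted route; the `reviews` service and its `v1`/`v2` subsets are hypothetical names a CI/CD pipeline might manage:

```yaml
# Send 90% of traffic to the stable version and 10% to the canary.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1      # stable release
      weight: 90
    - destination:
        host: reviews
        subset: v2      # canary release
      weight: 10
---
# Subsets map to pod labels, typically stamped by the CI/CD pipeline.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

The pipeline can shift the weights step by step (10, 50, 100) as health metrics hold, and roll back instantly by resetting them, all without touching application code.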

Credential Management 

We live in an API economy, and machine-to-machine communication needs to be secure.  Microservices have credentials to authenticate themselves and other microservices via TLS, and often also have app-layer credentials to serve as clients of external APIs. It’s tempting to focus only on the cost of initially configuring these credentials, but don’t forget the lifecycle – rotation, auditing, revocation, responding to CVEs. Centralizing these credentials in the service mesh layer reduces scope and improves the security posture.

APM

Traditional Application Performance Monitoring tools provide a dashboard that surfaces data allowing users to monitor their applications in one place. A service mesh takes this one step further by providing observability. Monitoring is aimed at reporting the overall health of systems, so it is best limited to key business and systems metrics derived from time-series-based instrumentation. Observability focuses on providing highly granular insights into the behavior of systems along with rich context, perfect for debugging purposes. Aspen Mesh provides deep observability that allows you to understand the current state of your system, and also provides a way to better understand system performance and behavior, even during what appears to be normal operation. Read more about the importance of observability in distributed systems.

Serverless

Serverless computing transforms source code into running workloads that execute only when called. The key difference between service mesh and serverless is that with serverless, a service can be scaled down to 0 instances if the system detects that it is not being used, thus saving you from the cost of continually having at least one instance running. Serverless can help organizations reduce infrastructure costs, while allowing developers to focus on writing features and delivering business value. If you’ve been paying attention to service mesh, these advantages will sound familiar. The goals with service mesh and serverless are largely the same – remove the burden of managing infrastructure from developers so they can spend more time adding business value. Read more about service mesh and serverless computing.

Learn More

If you'd like to learn more about how a service mesh can help you and your company, schedule a time to talk with one of our experts, or take a look at The Complete Guide to Service Mesh.


How Delphi Simplifies Kubernetes Security with Aspen Mesh

Customer Story: How Delphi Simplifies Kubernetes Security with Aspen Mesh

Delphi and Zero-Trust Security

Delphi delivers software solutions that help professional liability insurers streamline their operations and optimize their business processes. Operating in the highly regulated healthcare industry, privacy and compliance concerns such as HIPAA and APRA mandate a highly secure environment. As such, a Zero-trust environment is of utmost importance for Delphi and their customers. 

The infrastructure team at Delphi has fully embraced a cloud-native stack to deliver the Delphi Digital Platform to its customers. The team leverages Kubernetes to effectively manage builds and deploys. Delphi planned to use Kubernetes from the start, but was looking for a simpler security solution for their infrastructure that could be managed without implementations in each service. 

While Delphi was getting tremendous value from Kubernetes, they needed to find an easier way to bake security into the infrastructure. Taking advantage of a service mesh was the obvious solution to address this challenge, as it provides cluster-wide mTLS encryption. 

The team chose Istio to confront this problem. While the initial solution included terminating TLS with a certificate at the load balancer, that left plain HTTP between the load balancer and the service. Unfortunately, this was not acceptable in a highly regulated healthcare industry with strict requirements to keep personal data secure.

Achieving Security with a Service Mesh

To solve these challenges, Delphi engaged with Aspen Mesh to implement an end-to-end encrypted solution, from client to back-end SaaS applications. This was achieved by enabling mTLS mesh-wide from service to service and creating custom Istio policy manifests to integrate cert-manager and Let's Encrypt for client-side encryption. As a result, Delphi is able to provide secure ingress integration for a multitenant B2C environment, allowing Delphi to deploy a fully scalable solution.
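The cert-manager and Let's Encrypt integration described here can be sketched roughly as follows; the hostname and issuer name are hypothetical placeholders, and Delphi's actual manifests may differ:

```yaml
# Ask cert-manager to obtain a certificate from Let's Encrypt and store
# it as a Kubernetes secret that the Istio ingress gateway can present
# to clients, so traffic is encrypted from the client all the way in.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ingress-cert
  namespace: istio-system
spec:
  secretName: ingress-cert        # referenced by the gateway's TLS config
  dnsNames:
  - portal.example.com
  issuerRef:
    name: letsencrypt-prod        # a pre-configured ClusterIssuer
    kind: ClusterIssuer
```

cert-manager then renews and rotates the certificate automatically, which is what keeps this out of each application team's hands.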

[Read the Full Case Study Here]

This Aspen Mesh solution lets Delphi use Let’s Encrypt seamlessly with Istio, moving security out of application development and into a highly scalable infrastructure solution. Leveraging the power of Kubernetes, Istio and Aspen Mesh, the Delphi team is delivering a highly secure platform to its customers without the need to implement encryption in each service.

“At this point, I look at Aspen Mesh as an extension of my team” 

- Bill Reeder, Delphi Technology Lead Architect


How to Approach Zero-Trust Security with a Service Mesh


Last year was challenging for data security. In the first nine months alone, there were 5,183 breaches reported with 7.9 billion records exposed. Compared to mid-year 2018, the total number of breaches was up 33.3 percent and the total number of records exposed more than doubled, up 112 percent.


What does this tell us? That, despite significant technology investments and advancements, security is still hard. A single phishing email, missed patch, or misconfiguration can let the bad guys in to wreak havoc or steal data. For companies moving to the cloud and the cloud-native architecture of microservices and containerized applications, it’s even harder. Now, in addition to the perimeter and the network itself, there’s a new network infrastructure to protect: the myriad connections between microservice containers.

With microservices, the surface area available for attack has increased exponentially, putting data at greater risk. Moreover, network-related problems like access control, load balancing, and monitoring that had to be solved once for a monolith application now must be handled separately for each service within a cluster.

Zero-Trust Security and Service Mesh

Security is the most critical part of your application to implement correctly. A service mesh allows you to handle security in a more efficient way by combining security and operations capabilities into a transparent infrastructure layer that sits between the containerized application and the network. Emerging today to address security in this environment is the convergence of the Zero-Trust approach to network security and service mesh technology.

Here are some examples of attacks that a service mesh can help mitigate:

  • Service impersonation
    • A bad actor gains access to the private network for your applications, pretends to be an authorized service, and starts making requests for sensitive data.
  • Unauthorized access
    • A legitimate service makes requests for sensitive data that it is not authorized to obtain.
  • Packet sniffing
    • A bad actor gains access to your application’s private network and captures sensitive data from legitimate requests going over the network.
  • Data exfiltration
    • A bad actor sends sensitive data out of the protected network to a destination of their choosing.

So what are the tenets of Zero-Trust security, and how can a service mesh enable Zero Trust in the microservices environment? And how can Zero-Trust capabilities help organizations address and demonstrate compliance with stringent industry regulations?

Threats and Securing Microservices

Traditionally, network security has been based on having a strong perimeter to help thwart attackers, commonly known as the moat-and-castle approach. With a secure perimeter constructed of firewalls, you trust the internal network by default, and by extension, anyone who’s already there. Unfortunately, this was never a reliably effective strategy. More importantly, this approach is becoming even less effective in a world where employees expect access to applications and data from anywhere in the world, on any device. In fact, other types of threats, such as insider threats, have generally been considered by security professionals to be among the greatest risks to company data, leading to the development of new ways to address these challenges.

In 2010, Forrester Research coined the term “Zero Trust” and overturned the perimeter-based security model with a new principle: “never trust, always verify.” That means no individual or machine is trusted by default from inside or outside the network. Another Zero-Trust precept: “assume you’ve been compromised but may not yet be aware of it.” With the time to identify and contain a breach running at 279 days in 2019, that’s not an unsafe assumption.

Starting in 2013, Google began its transition to implementing Zero Trust into its networking infrastructure with much success and has made the results of their efforts open to the public in BeyondCorp. Fast forward to 2019 and the plans to adopt this new paradigm have spread across industries like wildfire, largely in response to massive data breaches and stricter regulatory requirements.

While there are myriad Zero-Trust networking solutions available for protecting the perimeter and the operation of corporate networks, there are many new miles of connections within the microservices environment that also need protection. A service mesh provides critical security capabilities such as observability to aid in optimizing MTTD and MTTR, as well as ways to implement and manage encryption, authentication, authorization, policy control and configuration in Kubernetes clusters.

Security Within the Kubernetes Cluster


Here are a few ways to approach enhancing your security with a service mesh:

  • Simplify microservices security with incremental mTLS
  • Manage identity, certificates and authorization
  • Access control and enforcing the level of least privilege
  • Monitoring, alerting and observability

A service mesh also adds controls over traffic ingress and egress at the perimeter. Allowed user behavior is addressed with role-based access control (RBAC). With these controls, the Zero-Trust philosophy of “trust no one, authenticate everyone” stays in force by providing enforceable least-privilege access to services in the mesh.
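As a sketch of least-privilege access control in an Istio-based mesh, an AuthorizationPolicy can restrict which service identities may call a workload and what they may do; the namespaces, labels and service account here are hypothetical:

```yaml
# Only the frontend's service account may call the orders service,
# and only with read-only (GET) requests. Because an ALLOW policy is
# attached to the workload, any request not matching a rule is denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: orders-least-privilege
  namespace: orders
spec:
  selector:
    matchLabels:
      app: orders
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/frontend/sa/frontend"]
    to:
    - operation:
        methods: ["GET"]
```

The `principals` field is backed by the mTLS identity in each workload's certificate, which is what makes the authorization cryptographically verifiable rather than IP-based.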

Aspen Mesh can help you to achieve a Zero-Trust security posture by applying these concepts and features. As an enterprise- and production-ready service mesh that extends the capabilities of Istio to address enterprise security and compliance needs, we also provide an intuitive hosted user interface and dashboard that make it easier to deploy, monitor, and configure these features.

Learn More About Zero-Trust Security and Service Mesh

Interested in learning more about how service mesh can help you achieve Zero-Trust security? Get the free white paper by completing the form below.




Managing Service Mesh Policy - Aspen Mesh

Managing Service Mesh Policy

Picture this: You’re the director of engineering at an enterprise organization. You have had a successful career managing small engineering teams, and you’re now balancing the demands of managing an engineering organization while contributing to overall planning and strategy as part of senior staff.

You see a future with your company where you can grow your influence by more closely tying your organization’s work to the bottom line of the business. You have many responsibilities, including ensuring that your team is able to deliver well-behaved, resilient and intuitive applications that provide amazing user experiences.

Your policies are critical because they specify how your application responds after an action. When your policy works well, your stakeholders are happy. Policies can also act as guardrails, so that engineers’ mistakes can’t cause user-facing failures. They can be optimizers, such as boosting network efficiency by automatically running clusters where it’s cheapest. They can also fix or mitigate faults: when an enhanced shopping cart is unhealthy, a more basic cart can be served instead. Security, access and scheduling policies all encode what response should happen automatically when an event occurs.

Your policy is obviously not working well when problems create more work for your team and cause your end users to suffer. Among the greatest fears of those in the DevOps world is waking up to read in the news about an outage or breach the team caused, directly or indirectly.

Agility + Stability = Win

Agility is a company’s number-one business advantage — it’s the catalyst for digital transformation, enabling companies to define new ways of working. The need to stay agile is why companies like yours are looking to develop new architectures and embrace microservices and container technologies, such as Kubernetes and Istio.

Fun fact: According to F5’s “2019 State of Application Service Report,” 56% of the organizations surveyed were already employing containers and 69% were executing digital transformation by leveraging containers in order to meet increasing customer demands.

But we all know that agility alone won’t help your company reach its goals. Agility plus stability will be your number one competitive advantage. When you’re able to meet evolving customer needs (while staying out of the news for downtime and security breaches), your competitors will be eating your dust.

Service Mesh and Policy

The result of companies embracing DevOps and microservice architectures is that teams can move faster and more autonomously than ever before. While that means faster time to market for applications, it also means more risk to the business.

So, who’s responsible for understanding and managing the company’s security and compliance requirements? You’ve got it — application teams that may not have the experience or desire to take on this burden.

The good news is that some service meshes allow you to remove the infrastructure burden from application teams in order to let platform operators handle it. Service mesh policy allows you to make disparate, ephemeral microservices act as a resilient system through controlling how services communicate with each other as well as with external systems. It also allows engineers to easily implement policies that can be mapped to application behavior outcomes, ensuring great end-user experiences.

Here are some additional benefits you can expect from service mesh policy:

  • Provide a better user experience: Meet SLOs and SLAs and make it clear that business objectives are being met by system behavior.
  • Optimize cost: Service mesh can help you get the ideal mix of cost savings and uptime.
  • Decrease risk: Being secure and compliant and ensuring data integrity is key to your company’s success.
  • Drive progressive delivery: Decouple developers from the business side, so your dev team is free to develop as they like, but your business controls when customer-facing features are pushed.

Policy Frameworks: Making Policy Easier to Manage

Many companies cope with the headache of specifying policy in several different places using many different tools. This adds risks around failures in compliance, increases the effort to modify policies and creates challenges in ensuring policies are both correct and applied appropriately to applications. Policy frameworks can help to relieve that pain, making it easy to create, test, review and improve policy — even when it includes contributions from many different roles in an organization.

Look for options that allow you to build on policy features sets by providing:

  • Advanced policy frameworks that allow users to describe business goals that are enforced in the application’s runtime environment.
  • A tested and hardened policy catalog that makes it easy to implement policies without having to build them yourself.
  • Role-based policy management that enables teams within organizations to create and apply policies according to their needs.
  • Streamlined application deployment packages that provide a uniform approach to API authentication and authorization with JWTs, mutual TLS and secure Ingress.
  • Global deployment and scaling of applications that obeys your compliance rules and business-driven cost optimization goals.
  • Integration into GitOps or other tech workflows and a graphical user interface.
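For example, the uniform JWT-based API authentication mentioned above can be sketched in Istio with a RequestAuthentication resource; the issuer URL, namespace and labels are hypothetical placeholders:

```yaml
# Validate JWTs on requests to the frontend workload. Requests with an
# invalid token are rejected at the sidecar, before reaching the app.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  jwtRules:
  - issuer: "https://auth.example.com"
    jwksUri: "https://auth.example.com/.well-known/jwks.json"
```

In practice this is typically paired with an AuthorizationPolicy that requires a valid request principal, so requests carrying no token at all are also rejected.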

In other words, a service mesh allows you to remove the burden of managing infrastructure from application teams. It is also emerging as an essential tool for platform operators to manage Kubernetes platforms. Other capabilities a service mesh offers include making disparate microservices act as a resilient system by controlling how services communicate with each other and with external systems, all managed through a single control plane. Additionally, a service mesh allows engineers to easily implement policies that can be mapped to application behavior outcomes, making it easy to ensure great end-user experiences.

The next time you’re thinking about how to solve these challenges, take a look at some service meshes and policy frameworks to see if they could help.

If you'd like to learn more about how policy frameworks can help you get more out of a service mesh, schedule a time to talk through more details with one of our experts.


Service Mesh For App Owners

Service Mesh for App Owners

How Service Mesh Can Benefit Your Applications

You’ve heard the buzz about service mesh, and if you're like most App Owners, that means you have a lot of questions. Is it something that will be worthwhile for your company to adopt? What business outcomes does a service mesh provide? Can it help you better manage your microservices? What are some measurements of success to think about when you’re considering or using a service mesh?

To start with, here are five key considerations for evaluating service mesh:

  1. Consider how a service mesh supports your organization's strategic vision and objectives
  2. Have someone in your organization take inventory of your technical requirements and your current systems
  3. Identify resources needed (internal or external) for implementation – all the way through to running your service mesh
  4. Consider how timing, cost and expertise will impact the success of your service mesh implementation
  5. Design a plan to implement, run, measure and improve over time

Business Outcomes From a Service Mesh

As an App Owner, you’re ultimately on the hook for business outcomes at your company. When you're considering adding new tech to your stack, consider your strategies first. What do you plan to accomplish, and how do you intend to make those accomplishments become a reality? 

Whatever your answers may be, if you're using microservices, a service mesh is worth investigating. It has the potential to help you get from where you are to where you want to be -- more securely, and faster.

But apart from just reaching your goals faster and more securely, a service mesh can offer a lot of additional benefits. Here are a few:

  • Decreasing risk
  • Optimizing cost
  • Driving better application behavior
  • Driving progressive delivery
  • Gaining a competitive advantage

Decreasing Risk

Risk analysis. Security. Compliance. These topics are priority one if you want to stay out of the news. A service mesh can help provide your company with better -- and provable -- security and compliance.

Security & Compliance

Everyone’s asking a good question: What does it take to achieve security in cloud native environments?

We know that there are a lot of benefits in cloud-native architectures: greater scalability, resiliency and separation of concerns. But new patterns also bring new challenges like ephemerality and new security threats.

With an enterprise service mesh, you get access to observability into security status, end-to-end encryption, compliance features and more. Here are a few security features you can expect from a service mesh:

  • mTLS status at-a-glance: Easily understand the security posture of every service in your cluster
  • Incremental mTLS: Control exactly what’s encrypted in your cluster at the service or namespace level
  • Fine-grained RBAC: Enforce the level of least privilege to ensure your organization does not create a security concern
  • Egress control: Understand and control exactly what your services are talking to outside your clusters
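As a concrete sketch of the incremental mTLS capability above, Istio's PeerAuthentication resource lets you require encryption for a single namespace while leaving others in permissive mode (the namespace name here is illustrative):

```yaml
# Require mTLS for all workloads in the "payments" namespace,
# while other namespaces can remain in permissive mode.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: payments-strict-mtls
  namespace: payments
spec:
  mtls:
    mode: STRICT
```

Applying a resource like this per namespace is what lets you roll mTLS out incrementally rather than flipping the whole cluster at once.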

Optimizing Cost

Every business needs cost optimizations. How do you choose which are going to make an impact and which aren’t? Which are most important? Which are you going to use?

As you know, one aspect to consider is talent. Your business does better when your people are working on new features and functionality rather than spending too much of their time on bug fixes. Tools like a service mesh can help boost your development team’s productivity, allowing them to spend more time on new business value adds and differentiators rather than bug fixes and maintenance.

But internal resources aren’t the only consideration. Without end users, your company wouldn’t exist. It’s becoming increasingly important to provide a better experience for both your stakeholders and your customers.

A service mesh helps applications running on microservice architectures rather than monolithic architectures. Microservices make it easier to build and maintain applications, and they bring greater agility, faster time to market and more uptime.

A service mesh can help you get the ideal mix of these cost savings and uptime.

Driving Better Application Behavior 

What happens when a new application wants to be exposed to the internet? You need to consider how to secure it, how to integrate it into your existing user-facing APIs, how you'll upgrade it and a host of other concerns. You're embracing microservices, so you might be doing this a lot. You want to drive better application behavior. Our advice? Use a service mesh policy framework to do this consistently, organization-wide.

Policy is simply a term for describing the way a system responds when something happens. A service mesh can help you improve your company’s policies by allowing you to: 

  1. Provide a clean interface specification between application teams who make new functionality and the platform operators who make it impactful to your users
  2. Make disparate microservices act as a resilient system through controlling how services communicate with each other and external systems and managing it through a single control plane
  3. Allow engineers to easily implement policies that can be mapped to application behavior outcomes, making it easy to ensure great end user experiences
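As one sketch of what such a policy looks like in practice, here is an Istio AuthorizationPolicy that only allows a frontend service account to call the backend (the service and namespace names are illustrative):

```yaml
# Only requests from the "frontend" service account may reach
# workloads labeled app: backend; everything else is denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  selector:
    matchLabels:
      app: backend
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/frontend"]
```

Because the policy lives in the mesh rather than in application code, the platform team can enforce it uniformly while application teams keep shipping features.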

An enterprise service mesh like Aspen Mesh enables each subject-matter expert in your organization to specify policies that enable you to get the intended behavior out of your applications and easily understand what that behavior will be. You can specify, from a business objective level, how you want your application to respond when something happens and use your service mesh to implement that.

Progressive Delivery

Continuous delivery has been a driving force behind software development, testing and deployment for years, and CI/CD best-practices are evolving with the advent of new technologies like Kubernetes and Istio. Progressive delivery, a term coined by James Governor, is a new approach to continuous delivery that includes “a new basket of skills and technologies… such as canarying, feature flags, [and] A/B testing at scale”.  

Progressive delivery decouples line-of-business (LOB) teams and IT by allowing the business to say when it’s acceptable for new code to hit the customer. This means the business can put guardrails around the customer experience by decoupling dev cycles and service activation.

With progressive delivery:

  • Deployment is not the same as release
  • Service activation is not the same as deployment
  • The developer can deploy a service and you can ship it, but that doesn't mean it's activated for all users
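Canary releases of this kind are often implemented with weighted routing in the mesh. A minimal Istio VirtualService sketch (the service and subset names are illustrative):

```yaml
# Send 90% of traffic to the stable version and 10% to the canary.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-canary
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2   # the canary release
      weight: 10
```

The v1 and v2 subsets would be defined in a companion DestinationRule; shifting the weights over time is how you promote the canary without a new deployment.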

Progressive delivery provides a better developer experience and also allows you to limit the blast radius of new deployments with feature flags, canary deploys and traffic mirroring. 

Gaining A Competitive Advantage

To stay ahead of your competition, you need an edge. Companies of many sizes across industries benefit from microservices or a service mesh. Enterprise companies evaluating or using a service mesh come in lots of different flavors -- those just starting, midway through or finished with a digital transformation, companies shifting from monoliths to microservices, and even organizations already using microservices that are working to identify areas for improvement.

Service Mesh Success Measurements

How do you plan to measure success with your service mesh? Since service mesh is new and evolving, it can be difficult to know what to look for in order to get a real pulse on how well it’s working for your company.

Start by asking some questions like these:

  1. Saving Resources: Is your team more efficient with a service mesh? How much more time are they able to spend on feature and function development rather than bug fixes and maintenance? 
  2. Your Users' Experience: Do you have a complete picture of your customers' experience and know the most valuable places to improve? How much more successful are deployments to production?
  3. Increasing Efficiency: How much time do you spend figuring out which microservice is causing an issue? Does your service mesh save you time here?

These are just a few ways to think about how your service mesh is working for you, as well as a built-in way to identify areas to improve over time. As with any really useful application, it's not just a one-and-done implementation. You'll have greater success by integrating measurement, iteration and improvement into your digital transformation and service mesh strategies.

Interested in learning more about service mesh? Check out the eBook Getting the Most Out of Your Service Mesh.


What a Service Mesh Provides

If you’re like most people with a finger in the tech-world pie, you’ve heard of a service mesh. And you know what a service mesh is. And now you’re wondering what it can solve for you.

A service mesh is an infrastructure layer for microservices applications that can help reduce the complexity of managing microservices and deployments by handling infrastructure service communication quickly, securely and reliably. Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

A good service mesh keeps your company’s services running the way they should, giving you and your team access to the powerful tools that you need — plus access to engineering and support — so you can focus on adding the most value to your business.

Want to learn more about this? Check out the free Complete Guide to Service Mesh.

Next, let’s dive into three key areas where a service mesh can really help: observability, security and operational control.

Observability

Are you interested in taking your system monitoring a step further? A service mesh provides monitoring plus observability. While monitoring reports overall system health, observability focuses on highly granular insights into the behavior of systems along with rich context.

Deep System Insights

Kubernetes seemed like the way to rapid iteration and quick development sprints, but the promise and the reality of managing containerized applications at scale are two very different things.


Docker and Kubernetes enable you to more easily build and deploy apps. But it’s often difficult to understand how those apps are behaving once deployed. So, a service mesh provides tracing and telemetry metrics that make it easy to understand your system and quickly root cause any problems.

An Intuitive UI

A service mesh is uniquely positioned to gather a trove of important data from your services. The sidecar approach places an Envoy sidecar next to every pod in your cluster, which then surfaces telemetry data up to the Istio control plane. This is great, but it also means a mesh will gather more data than is useful. The key is surfacing only the data you need to confirm the health and security status of your services. A good UI solves this problem, and it also lowers the bar on the engineering team, making it easier for more members of the team to understand and control the services in your organization’s architecture.
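The sidecar approach described above is typically enabled by labeling a namespace for automatic injection, so every pod deployed there gets an Envoy proxy without application changes. A sketch in plain Kubernetes YAML (the namespace name is illustrative):

```yaml
# Pods created in this namespace automatically receive an
# Envoy sidecar from Istio's injection webhook.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled
```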

Security

A service mesh provides security features aimed at securing the services inside your network and quickly identifying any compromising traffic entering your cluster. A service mesh can help you more easily manage security through mTLS, ingress and egress control, and more.

mTLS and Why it Matters

Securing microservices is hard. There are a multitude of tools that address microservices security, but service mesh is the most elegant solution for addressing encryption of on-the-wire traffic within the network.


Service mesh provides defense with mutual TLS (mTLS) encryption of the traffic between your services. The mesh can automatically encrypt and decrypt requests and responses, removing that burden from the application developer. It can also improve performance by prioritizing the reuse of existing, persistent connections, reducing the need for the computationally expensive creation of new ones. With service mesh, you can secure traffic over the wire and also make strong identity-based authentication and authorizations for each microservice.

We see a lot of value in this for enterprise companies. With a good service mesh, you can see whether mTLS is enabled and working between each of your services and get immediate alerts if security status changes.

Ingress & Egress Control

Service mesh adds a layer of security that allows you to monitor and address compromising traffic as it enters the mesh. Istio integrates with Kubernetes as an ingress controller and takes care of load balancing for ingress. This allows you to add a level of security at the perimeter with ingress rules. Egress control allows you to see and manage external services and control how your services interact with them.
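For instance, Istio's ServiceEntry resource is commonly used to declare which external services the mesh may reach (the host shown is illustrative):

```yaml
# Register an external API so the mesh can observe and control
# traffic leaving the cluster toward it.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-payments-api
spec:
  hosts:
  - api.payments.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
```

Combined with an egress policy that blocks undeclared destinations, this gives you an explicit inventory of everything your services talk to outside the cluster.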

Operational Control

A service mesh allows security and platform teams to set the right macro controls to enforce access controls, while allowing developers to make customizations they need to move quickly within these guardrails.

RBAC

A strong Role Based Access Control (RBAC) system is arguably one of the most critical requirements in large engineering organizations, since even the most secure system can be easily circumvented by overprivileged users or employees. Restricting privileged users to the least privileges necessary to perform their job responsibilities, ensuring access to systems is set to “deny all” by default, and maintaining proper documentation detailing roles and responsibilities are among the most critical security concerns in the enterprise.


We’ve worked to solve this challenge by providing Istio Vet, which is designed to warn you of incorrect or incomplete configuration of your service mesh and to provide guidance to fix it. Global Istio configuration resources require a different approach -- preventing misconfigurations by refusing to allow them in the first place -- which is addressed by our Traffic Claim Enforcer solution.

The Importance of Policy Frameworks

As companies embrace DevOps and microservice architectures, their teams are moving more quickly and autonomously than ever before. The result is a faster time to market for applications, but more risk to the business. The responsibility of understanding and managing the company’s security and compliance needs is now shifted left to teams that may not have the expertise or desire to take on this burden.

Service mesh makes it easy to control policy and understand how policy settings will affect application behavior. In addition, analytics insights help you get the most out of policy through monitoring, vetting and policy violation analytics so you can quickly understand the best actions to take.

Policy frameworks allow you to securely and efficiently deploy microservices applications while limiting risk and unlocking DevOps productivity. Key to this innovation is the ability to synthesize business-level goals, regulatory or legal requirements, operational metrics, and team-level rules into high performance service mesh policy that sits adjacent to every application.

A good service mesh keeps your company’s services running the way they should, giving you observability, security and operational control plus access to engineering and support, so you are free to focus on adding more value to your business.

If you’d like to learn more about this, get your free copy of the Complete Guide to Service Mesh here.



How to Get the Most Out of Your Service Mesh

You’ve been hearing about service mesh. You have an idea of what it does and how it can help you manage your microservices. But what happens once you have one? How do you get as much out of it as you can?

Let’s start with a quick review of what a service mesh is, why you would need one, then move on to how to get the most out of your service mesh.

What's a Service Mesh?

  1. A transparent infrastructure layer that sits between your network and application, helping with communications between your microservices

  2. Could be your next game changing decision

A service mesh is designed to handle a high volume of service-to-service communication using application programming interfaces (APIs). It ensures that communication among containerized application services is fast, reliable and secure. The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and write-once, run anywhere policy for microservices in your Kubernetes clusters.

Service meshes also address challenges that arise when your application is being consumed by an end user. The first key capability is monitoring the health of services provided to the end user, and then tracing problems with that health quickly to the correct microservice. Next, you'll need to ensure communication is secure and resilient.

When Do You Need a Service Mesh?

We’ve been having lots of discussions with people spread across the microservices, Kubernetes and service mesh adoption curves. And while it’s clear that many enterprise organizations are at least considering microservices, many are still waiting for best practices to emerge before deciding on their own path forward. The landscape keeps changing as needs evolve.

As an example, more organizations are looking to microservices for brownfield deployments, whereas – even a couple of years ago – almost everyone only considered building microservices architectures for greenfield. This tells us that as microservices technology and tooling continues to evolve, it’s becoming more feasible for non-unicorn companies to effectively and efficiently decompose the monolith into microservices. 

Think about it this way: in the past six months, the top three reasons we’ve heard people say they want to implement service mesh are:

  1. Observability – to better understand the behavior of Kubernetes clusters 
  2. mTLS – to add cluster-wide service encryption
  3. Distributed Tracing – to simplify debugging and speed up root cause analysis
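The second item, cluster-wide mTLS, can be expressed as a single mesh-wide resource in Istio. A minimal sketch, assuming istio-system is the mesh's root namespace:

```yaml
# A PeerAuthentication named "default" in the root namespace
# requires mTLS between all workloads across the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```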

Gauging the current state of the cloud-native infrastructure space, there’s no doubt that there’s still plenty of exploration and evaluation of tools like Kubernetes and Istio. But the gap is definitely closing. Companies are closely watching the leaders in the space to see how they are implementing and what benefits and challenges they are facing. As more organizations successfully adopt these new technologies, it’s becoming obvious that, while there’s a skills gap and new complexity to account for, the outcomes -- increased velocity, better resiliency and improved customer experience -- mandate that many organizations actively map their own path with microservices. This will help ensure that they are not left behind by the market leaders in their space.

Getting the Most Out of Your Service Mesh

In order to really stay ahead of the competition, you need to know best practices for getting the most out of your service mesh, recommendations from industry experts about how to measure your success, and ways to keep getting even more out of your technology.

But what do you want out of a service mesh? Since you’re reading this, there’s a good chance you’re responsible for making sure that your end users get the most out of your applications. That’s probably why you started down the microservices path in the first place.

If that’s true, then you’ve probably realized that microservices come with their own unique challenges, such as:

  • Increased surface area that can be attacked
  • Polyglot challenges
  • Controlling access for distributed teams developing towards a single application

That’s where a service mesh comes in. Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

TL;DR: a good service mesh keeps your company’s services running the way they should, giving you the observability, security and traffic management capabilities you need to effectively manage and control containerized applications so you can focus on adding the most value to your business.

When Service Mesh is a Win/Win

Service mesh is a technology that can help entire organizations work together for better outcomes. In other words, service mesh is the ultimate DevOps enabler.

Here are a few highlights of the value a service mesh provides across teams:

  • Observability: take system monitoring a step further by providing observability. Monitoring reports overall system health, while observability focuses on highly granular insights into the behavior of systems along with rich context
  • Security and Decreased Risk: better secure the services inside your network and quickly identify any compromising traffic entering your clusters
  • Operational Control: allow security and platform teams to set the right macro controls to enforce access controls, while allowing developers to make customizations they need to move quickly within defined guardrails
  • Increase Efficiency with a Developer Toolbox: remove the burden of managing infrastructure from the developer and provide developer-friendly features such as distributed tracing and easy canary deploys 

What’s the Secret to Getting the Most Out of Your Service Mesh?

There are a lot of things you can do to get more out of your service mesh. Here are three high level tactics to start with:

  1. Align on service mesh goals with your teams
  2. Choose the service mesh that can be broadly deployed to address your company's needs
  3. Measure your service mesh success over time in order to identify and make improvements

Still looking for more info about this? Check out the eBook: Getting the Most Out of Your Service Mesh.

Complete this form to get your copy of the eBook Getting the Most Out of Your Service Mesh:



Understanding Service Mesh

The Origin of Service Mesh

In the beginning, we had packets and packet-switched networks.

Everyone on the Internet — all 30 of them — used packets to build addressing and session establishment/teardown. Then they’d need a retransmission scheme. Then they’d build an ordered byte stream out of it.

Eventually, they realized they had all built the same thing. The RFCs for IP and TCP standardized this, operating systems provided a TCP/IP stack, so no application ever had to turn a best-effort packet network into a reliable byte stream.

We took our reliable byte streams and used them to make applications. Turns out that a lot of those applications had common patterns again — they requested things from servers, and then got responses. So, we separated these request/responses into metadata (headers) and body.

HTTP standardized the most widely deployed request/response protocol. Same story. App developers don't have to implement the mechanics of requests and responses. They can focus on the app on top.

There's a newer set of functionality that you need to build a reliable microservices application: service discovery, versioning, zero trust -- all the stuff popularized by the Netflix architecture, 12-factor apps and so on. We see the same thing happening again: an emerging set of best practices that you have to build into each microservice to be successful.

So, service mesh is about putting all that functionality again into a layer, just like HTTP, TCP, packets, that's underneath your code, but creating a network for services rather than bytes.

Questions? Download The Complete Guide to Service Mesh or keep reading to find out more about what exactly a service mesh is.

What Is A Service Mesh?

A service mesh is a transparent infrastructure layer that sits between your network and application.

It’s designed to handle a high volume of service-to-service communications using application programming interfaces (APIs). A service mesh ensures that communication among containerized application services is fast, reliable and secure.

The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and the ability to control policy and configuration in your Kubernetes clusters.

Service mesh helps address many of the challenges that arise when your application is being consumed by the end user. Being able to monitor what services are communicating with each other, if those communications are secure, and being able to control the service-to-service communication in your clusters are key to ensuring applications are running securely and resiliently.

More Efficiently Managing Microservices

The self-contained, ephemeral nature of microservices comes with some serious upside, but keeping track of every single one is a challenge — especially when trying to figure out how the rest are affected when a single microservice goes down. The end result is that if you’re operating or developing in a microservices architecture, there’s a good chance part of your day is spent wondering what the hell your services are up to.

With the adoption of microservices, problems also emerge due to the sheer number of services that exist in large systems. Problems like security, load balancing, monitoring and rate limiting that had to be solved once for a monolith, now have to be handled separately for each service.

Service mesh helps address many of these challenges so engineering teams, and businesses, can deliver applications more quickly and securely.

Why You Might Care

If you’re reading this, you’re probably responsible for making sure that you and your end users get the most out of your applications and services. In order to do that, you need to have the right kind of access, security and support. That’s probably why you started down the microservices path.

If that’s true, then you’ve probably realized that microservices come with their own unique challenges, such as:

  1. Increased surface area that can be attacked
  2. Polyglot challenges
  3. Controlling access for distributed teams developing on a single application

That’s where a service mesh comes in.

A service mesh is an infrastructure layer for microservices applications that can help reduce the complexity of managing microservices and deployments by handling infrastructure service communication quickly, securely and reliably. 

Service meshes are great at solving operational challenges and issues when running containers and microservices because they provide a uniform way to secure, connect and monitor microservices. 

Here’s the point: a good service mesh keeps your company’s services running the way they should. A service mesh designed for the enterprise, like Aspen Mesh, gives you all the observability, security and traffic management you need — plus access to engineering and support, so you can focus on adding the most value to your business.

And that is good news for DevOps.

The Rise of DevOps - and How Service Mesh Is Enabling It

It’s happening, and it’s happening fast.

Companies are transforming internal orgs and product architectures along a new axis of performance. They’re finding more value in iterations, efficiency and incremental scaling, forcing them to adopt DevOps methodologies. This focus on time-to-market is driving some of the most cutting-edge infrastructure technology that we have ever seen. Technologies like containers and Kubernetes, and a focus on stable, consistent and open APIs allow small teams to make amazing progress and move at the speeds they require. These technologies have reduced the friction and time to market.

The adoption of these technologies isn’t perfect, and as companies deploy them at scale, they realize that they have inadvertently increased complexity and de-centralized ownership and control. In many cases, it’s challenging to understand the entire system.

A service mesh enables DevOps teams by helping manage this complexity. It provides autonomy and freedom for development teams through a stable and scalable platform, while simultaneously providing a way for platform teams to enforce security, policy and compliance standards.

This empowers your development teams to make choices based on the problems they are solving rather than being concerned with the underlying infrastructure. Dev teams now have the freedom to deploy code without the fear of violating compliance or regulatory guidelines, and platform teams can put guardrails in place to ensure your applications are secure and resilient.

Want to learn more? Get the Complete Guide to Service Mesh here.


From NASA to Service Mesh

The New Stack recently published a podcast featuring our CTO, Andrew Jenkins, discussing How Service Meshes Found a Former Space Dust Researcher. In the podcast, Andrew talks about how he moved from working on electrical engineering and communication protocols for NASA to software and finally service mesh development here at Aspen Mesh.

“My background is in electrical engineering, and I used to work a lot more on the hardware side of it, but I did get involved in communication, almost from the physical layer, and I worked on some NASA projects and things like that,” said Jenkins. “But then my career got further and further up into the software side of things, and I ended up at a company called F5 Networks. [Eventually] this ‘cloud thing’ came along, and F5 started seeing a lot of applications moving to the cloud. F5 offers their product in a version that you use in AWS, so what I was working on was an open source project to make a Kubernetes ingress controller for the F5 device. That was successful, but what we saw was that a lot of the traffic was shifting to the inside of the Kubernetes cluster. It was service-to-service communication from all these tiny things--these microservices--that were designed to be doing business logic. So this elevated the importance of communication...and that communication became very important for all of those tiny microservices to work together to deliver the final application experience for developers. So we started looking at that microservice communication inside and figuring out ways to make that more resilient, more secure and more observable so you can understand what’s going on between your applications.”

In addition, the podcast covers the evolution of service mesh, more details about tracing and logging, canaries, Kubernetes, YAML files and other surrounding technologies that extend service mesh to help simplify microservices management.

“I hope service meshes become the [default] way to deal with distributed tracing or certificate rotation. So, if you have an application, and you want it to be secure, you have to deal with all these certs, keys, etc.,” Jenkins said. “It’s not impossible, but when you have microservices, you do not have to do it a whole lot more times. So that’s why you get this better bang for the buck by pushing that down into that service mesh layer where you don’t have to repeat it all the time.”

To listen to the entire podcast, visit The New Stack’s post.

Interested in reading more articles like this? Subscribe to the Aspen Mesh blog: