Aspen Mesh Supports Istio Joining CNCF as Open Source Technology

Aspen Mesh has been a contributor to Istio since 2017 and we support today’s announcement that Istio will be donated to the Cloud Native Computing Foundation. It has been amazing to see the growth, not only in the functionality and quality of the project but also in the diversity and participation of the community over the years. We are excited about the future and continuing our involvement and contributions to the project.  


With the move to CNCF, we look forward to continued vendor-neutral governance and the increased participation by open source developers around the world. As the deployments of Istio continue to increase and be adopted by cloud, enterprise and service provider companies, the number and diversity of use cases will continue to grow. This donation to the premier open source cloud foundation ensures that all voices can be heard. As Istio continues to mature as the leading service-mesh open source technology, so will its governance, quality, ease of use and feature-set. 

Istio’s application to become a CNCF Project is the first step in becoming a CNCF technology. Aspen Mesh is proud to have played a role in advancing Istio’s technology. We congratulate the Istio team and look forward to many future successes together! 




How to Achieve Engineering Efficiency with a Service Mesh

As the idea for Aspen Mesh was taking shape in my mind, I had the opportunity to meet with a cable provider's engineering and operations teams to discuss the challenges they had operating their microservice architecture. When we all gathered in the large, very corporate conference room and exchanged the normal introductions, I could see that something just wasn't right with the folks in the room. They looked like they had been hit by a truck. The reason for that is what turned this meeting into one of the most influential meetings of my life.

It turned out that the entire team had been up all night working on an outage in some of the services that were part of their guide application. We talked about the issue, how it manifested itself and what impact it had on their customers. But there was one statement that has stuck with me since: “The worst part of this 13-hour outage was that it took us 12 hours to get the right person on the phone; and only one hour to get it fixed…”

That is when I knew that a service mesh could solve this problem and increase the engineering efficiency for teams of all sizes. First, by ensuring that in day-to-day engineering and operations, experts were focused on what they were experts of. And second, when things went sideways, it was the strategic point in the stack that would have all the information needed to root-cause a problem — but also be the place that you could rapidly restore your system.

Day-to-Day Engineering and Operations

A service mesh can play a critical role in day-to-day engineering and operations activities by streamlining processes, reducing test environments and allowing experts to perform their duties independent of application code cycles. This lets DevOps teams work more efficiently: developers focus on providing value to the company's customers through applications, while operators provide value through improved customer experience, stability and security.

The properties of a service mesh can enable your organization to run more efficiently and reduce operating costs. Here are some ways a service mesh allows you to do this:

  • Canary testing of applications in production can eliminate expensive staging environments.
  • Autoscaling of applications can ensure efficient use of resources.
  • Traffic management can eliminate duplicated coding efforts to implement retry-logic, load-balancing and service discovery.
  • Encryption and certificate management can be centralized to reduce overhead and the need to make application changes and redeployment for changing security policies.
  • Metrics and tracing gives teams access to the information they need for performance and capacity planning, and can help reduce rework and over-provisioning of resources.
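Traffic management of this kind is usually expressed as weighted routes. Below is a minimal sketch of a canary using Istio's VirtualService API; the service name `reviews` and the `v1`/`v2` subsets are hypothetical, and the subsets would need to be defined in a matching DestinationRule:

```yaml
# Shift 10% of traffic to the canary (v2) while 90% stays on v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Promoting the canary is then just a matter of adjusting the weights — no staging environment and no application redeployment.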

As organizations continue to shift-left and embrace DevOps principles, it is important to have the right tools to enable teams to move as quickly and efficiently as possible. A service mesh helps teams achieve this by moving infrastructure-like features out of the individual services and into the platform. This allows teams to leverage them in a consistent and compliant manner; it allows Devs to be Devs and Ops to be Ops, so together they can truly realize the velocity of DevOps.

Reducing Mean-Time-To-Resolution

Like it or not, outages happen. And when they do, you need to be able to root-cause the problem, develop a fix and deploy it as quickly as possible to avoid violating your customer-facing SLAs and your internal SLOs. A service mesh is a critical piece of infrastructure when it comes to reducing your MTTR and ensuring the best possible user experience for your customers. Because of its unique position in the platform — sitting between the container orchestration layer and the application — it can not only gather telemetry data and metrics, but also transparently implement policy and traffic management changes at runtime. Here are some of the ways:

  • Metrics can be collected by the proxy in a service mesh and used to understand where problems are in the application, show which services are underperforming or using too many resources, and help inform decisions on scaling and resource optimization.
  • Layer 7 traces can be collected throughout the application and correlated together, allowing teams to see exactly where in the call-flow a request failed.
  • Policy can allow platform teams to direct traffic — and in the case of outages, redirect traffic to other, healthier services.

All of this functionality can be collected and implemented consistently across services — and even clusters — without impacting the application or placing additional burden or requirements on application developers.
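As one hedged illustration of the "redirect traffic to healthier services" point above, Istio's outlier detection can temporarily eject failing endpoints from the load-balancing pool automatically. A sketch using a DestinationRule (the `guide` host name is hypothetical):

```yaml
# Eject an endpoint for 60s after 5 consecutive 5xx errors,
# checking every 30s, but never remove more than half the pool.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: guide
spec:
  host: guide
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
      maxEjectionPercent: 50
```

Because this runs in the mesh's proxies, unhealthy instances are sidelined at runtime without any change to the application itself.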

It has been said that downtime can cost an enterprise company up to $5,600 per minute. In an extreme example, let's think back to my meeting with the cable provider. If a service mesh could have enabled their team to get the right expert on the phone in half the time, they would have saved six hours — 360 minutes, or roughly $2,016,000. That's a big number, and more importantly, all of those engineers could have been home with their families that night, instead of in front of their monitors.


Harnessing the Power of Microservices to Overcome an Uncertain Marketplace

According to PwC’s 23rd Annual Global CEO Survey, the outlook for 2020 can be summarized in one word—uncertainty. According to the survey, only 27% of CEOs are “very confident” in their prospects for revenue growth in 2020, a low level not seen since 2009. As organizations navigate their way through digital transformations, how can they leverage strategies and applications that help them overcome uncertainty, rather than causing more?

In this article, we talk with Shawn Wormke, Incubation Lead at Aspen Mesh, about how a service mesh can help companies achieve agility with stability in order to overcome uncertainty for the future and get ahead in the marketplace.  

Q: Many CEOs are feeling a sense of uncertainty. More than half of the CEOs surveyed by PwC believe the rate of global GDP growth will decline in 2020. With this concern top of mind, how can applications like service mesh drive value for organizations by addressing security challenges, skills gaps and increased complexity at scale?

A: In times of uncertainty, the potential for flat to decreasing revenues adds intense pressure for CEOs to maintain and grow their business. One of the ways they can overcome this challenge is to focus on becoming more agile and delivering more value to their customers — faster than their competitors — with new products and services. Almost every company has embraced Agile workflows and is adopting cloud and container technologies like Kubernetes to enable this, but these new complex and distributed technologies come with their own set of challenges around operations and security.

One of the patterns for addressing those challenges is service mesh, and I believe it should be at the center of a company's approach to microservice architectures. Because a service mesh, like Istio or Aspen Mesh, is inserted next to every microservice, it provides a strategic point of control for operating and securing next-generation application architectures. By moving critical operations like encryption, certificate management, policy, traffic steering, high availability, logging and tracing out of the application and into the platform — where they belong — you can ensure that your human capital is adding application value rather than managing infrastructure.

Q: What cloud native technologies and enterprise architecture modernization strategies are you seeing organizations leverage in order to thrive in a quickly evolving marketplace?

A: Microservices architectures, and the container technologies like Kubernetes that enable them, have fundamentally changed the way applications are delivered and managed. These new patterns allow companies to efficiently scale both software and engineering, reduce risk, and improve the overall quality of applications.

These technologies are new and they present challenges like most nascent technologies. But, with the amazing work of open source projects and communities like Istio, Prometheus, Jaeger, Grafana and many others, there are solutions available to help overcome these challenges.

Q: A business's agility is what allows it to rapidly grow its revenue streams, respond to customer needs and defend against disruption. But is that enough?

A: Agility is a company's number one business advantage. It is a business's agility that allows it to rapidly grow revenue, respond to customer needs, and defend against disruption. It is the need for agility that drives digital transformations and causes organizations to define new ways of working, develop new application architectures, and embrace cloud and container technologies.

However, agility with stability is the critical competitive advantage. Companies that can meet evolving customer needs — while staying out of the news for downtime and security breaches — will be the winners of tomorrow.

Q: How exactly does a service mesh get you agility with stability?

A: Because a service mesh sits below the application and above the network — and has a control plane that can consume and apply policy — it enables development and platform teams to work together while focusing on their specific areas of expertise so companies can deliver solutions to their customers that are performant, secure and compliant. 

In addition, a service mesh handles encryption and provides visibility into an application's behavior like no other technology. It can see the “final product” of the application or service as an outsider and provides a unique perspective on that service’s behavior, performance and communication patterns. This allows operators to operate, manage and scale those services based on their actual needs and the end users’ experience.

Q: Service mesh is a useful tool, but when does it make sense for an organization to consider adopting one?

A: It is true that the service mesh pattern has been around for a few years, but it is still an early market for the technology and surrounding products. Specifically at Aspen Mesh, we have been working with customers in this area for over two years and realize that each organization is different in their maturity and needs when it comes to Kubernetes and microservices. A company may adopt service mesh early in order to meet compliance needs. Some organizations run into challenges in production and need visibility, while others may need to reduce errors caused by lack of engineering expertise on their development teams.

In general, you probably need a service mesh if you can no longer draw your microservices topology on a sheet of paper, or hold it in your head. This usually happens for our customers at about 15-20 microservices.

Q: What are some of the top use cases you are seeing service mesh used for?

A: Adopting Kubernetes and containers is a journey for most organizations. Along that journey, roadblocks will be encountered that must be addressed. The most common path for the organizations we talk to involves:

  • Understanding what services they have deployed and how they are communicating,
  • Making their platform and applications comply with company or regulatory requirements, and
  • Ensuring they are providing the best possible user experience and reducing downtime.

Therefore, the most common use cases we see our customers implementing a service mesh for include visibility, observability, and encryption of service-to-service communication. More recently, we've seen adoption increase for operational benefits that allow them to quickly identify, diagnose and resolve customer-impacting problems.

Q: What do you see for the future of service mesh? How will it help organizations overcome the challenges associated with uncertainty?

A: Service mesh is a new frontier, and despite all the recent attention, it is still a nascent market and pattern. But because of its strategic point of control in application architectures and its ability to operate in a transparent and distributed manner, more and more companies — as they move from proof-of-concept to production with their new application architectures — will come to rely on a service mesh to provide a consistent layer in which they can control and manage their services while ensuring that all applications are performing optimally and meeting compliance and security requirements.

As service meshes mature, they will become a critical piece of infrastructure that enables organizations to maximize their true competitive advantage of agility with stability.

If you’re scaling microservices on Kubernetes, it's worth considering a service mesh to help you get the most out of your distributed systems. To learn more about service mesh, feel free to reach out to our team to schedule a time to meet.

How A Service Mesh Can Make Application Delivery More Secure


What is the biggest business advantage that today’s companies have? Their level of agility. 

A business's agility is what allows it to rapidly grow its revenue streams, respond to customer needs and defend against disruption. It is the need for agility that drives digital transformations and pushes companies to define new ways of working, develop new application architectures and embrace cloud and container technologies.

But agility alone won’t get a business where they need to be; agility with stability is the critical competitive advantage. Companies that can move faster and rapidly meet evolving customer needs — while staying out of the news for downtime and security breaches — will be the winners of tomorrow.

Service meshes help organizations achieve agility with stability by increasing the visibility and observability of their microservices, allowing them to gain control over a complex system and to enforce their applications' security and compliance requirements. As companies continue to adopt cloud native technologies, they must not lose sight of ensuring that the applications they deliver are secure and compliant — and a service mesh provides many tools in its toolbox that allow them to do exactly that.

Let the Experts Be Experts

In order to ensure that applications are secure, organizations need security and compliance experts. And, those experts need to be leveraged to create business-wide policies that protect customer and company data. However, all too often in the DevOps world, the implementation and application of those policies is left to application teams that are already implementing the individual microservices that make up the larger application. The individual teams do not have the expertise or context to understand the larger security needs of the business, or worse, they may see security requirements as an impediment to delivering their code to production on schedule.

Service mesh can let experts be experts by allowing them to create security and authorization policies that can be applied as a transparent layer under the application services regardless of the application developer’s decisions. By creating this security layer, the burden of implementation becomes aligned with the people who have the most interest in its success. The friction is also removed from the people who are least invested. This allows the business to be confident that their applications are as compliant — and their data is as secure — as their risk profile requires.
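In Istio-based meshes, that transparent security layer is typically expressed as AuthorizationPolicy resources. A sketch of a policy a security team could apply without touching application code — the namespaces, labels and service account here are hypothetical:

```yaml
# Only the frontend's service account may call the billing
# service, and only over GET and POST.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: billing-allow-frontend
  namespace: billing
spec:
  selector:
    matchLabels:
      app: billing
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/frontend/sa/web"]
    to:
    - operation:
        methods: ["GET", "POST"]
```

The security team owns this resource; the billing team's code never needs to know it exists.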

Encryption and Identity for Zero Trust

Data needs to be protected at all times, not just while it is at rest in a database somewhere. This includes ensuring that data is encrypted while moving between microservices, regardless of whether that data ever hits a physical wire on the network. Protecting that data means that you know:

  1. Who has access to the data
  2. That you trust them
  3. That they are sending and receiving the data securely

Because a service mesh is a transparent infrastructure layer that sits between the network and the microservices, it is the perfect place to enforce data encryption, identity, trust and permission.

By deploying a service mesh, organizations can ensure a secure by default posture in a zero-trust environment without changing existing applications or burdening application developers with complex authentication schemes, certificate management or permission revocation and additions. By delegating those functionalities to the mesh, organizations can easily deploy a more secure and compliant application environment with greater efficiency, less overhead and more confidence in their security posture.
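In Istio, for instance, this secure-by-default posture can be turned on mesh-wide with a single resource and no application changes — a minimal sketch:

```yaml
# Require mutual TLS for all workload-to-workload
# traffic across the entire mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

The sidecars handle certificate issuance and rotation automatically, so developers never manage keys or TLS code themselves.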

Find and Fix with a Service Mesh

Mistakes will happen and security policies will have holes in them. Organizations shouldn't expect people and the policies they create to be perfect, but they must expect to find and fix those mistakes before others find and exploit them. Some of this can be done with tools and libraries that run inside of the application's code or container, or with firewalls and other products that run in the physical network. But these techniques miss one key element: what is happening to a service's requests as they move in and out of the application, while those requests are still inside the cluster and its hosts.

A service mesh, especially an Istio-based sidecar mesh like Aspen Mesh, provides organizations with a unique view into every microservice's request/response behavior. With this additional visibility, you can understand the behavior of a service's traffic before and after it leaves the application's code and container, forming a request trace from source to destination and back. Not only does this allow you to find anomalous requests and unknown traffic sources and destinations, it also allows you to stop them from accessing services they should not have access to through security and policy changes. Even more importantly, these policy changes can happen without directly impacting or changing the application, reducing the amount of time it takes to close security holes while lessening the overall risk of exploits.
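As a hedged example of closing such a hole without touching the application, an Istio DENY policy (the namespace and label names here are hypothetical) can cut off an unexpected traffic source as soon as it shows up in the mesh's telemetry:

```yaml
# Block any request to the payments service that originates
# from the untrusted "sandbox" namespace.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-deny-sandbox
  namespace: payments
spec:
  selector:
    matchLabels:
      app: payments
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["sandbox"]
```

Applying this takes effect in the sidecars at runtime, so the exposure window closes in seconds rather than waiting on a redeployment.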

As organizations continue to embrace cloud and container technologies — and their use of those technologies matures and scales — a service mesh will become a vital part of their security and compliance strategy.

Learn More About Securing Containerized Applications

Interested in learning more about service mesh and security? Fill out the form below to get the white paper on how a service mesh can help you adopt a Zero-Trust security posture for your containerized applications.


Service Mesh Insider: An Interview with Shawn Wormke

Have you ever wondered how we got to service mesh? What backgrounds, experiences and technologies led to the emergence of service mesh? 

We recently put together an interview with Aspen Mesh’s Founder, Shawn Wormke in order to get the inside scoop for you. Read on to find out the answers to these three questions:

  1. What role did your technical expertise play in how Aspen Mesh focuses on enterprise service mesh?
  2. Describe how your technical and business experience combined to create an enterprise startup and inform your understanding of how to run a “modern” software company?
  3. What characteristics define a “modern” enterprise, and how does Aspen Mesh contribute to making it a reality?

1. What role did your technical expertise play in how Aspen Mesh focuses on the enterprise?

I started my career at Cisco working in network security and firewalls on the ASA product line and later the Firewall Services Module for the Catalyst 6500/7600 series switches and routers. Both of these products were focused on the enterprise at a time when security was starting to move up the stack and become more and more distributed throughout the network. We were watching our customers move from L2 transparent firewalls to L3/L4 firewalls that required application logic in order to “fixup” dynamic ports for protocols like FTP, SIP and H.323. Eventually that journey up the stack continued to L7 firewalls that were doing URL, header and payload inspection to enforce security policy.

At the same time that this move up the stack was happening, customers were starting to look at migrating workloads to VMs and were demanding new form factors and valuing different performance metrics. No longer were speeds, feeds and dragstrip numbers important; the focus was shifting to footprint and elasticity. The result of this shift in priority was a change in mindset when it came to how enterprises were thinking about expenses. They started to shift expenses from large, capacity-stranding CAPEX purchases to more frequent OPEX transactions that were aligned with a software-first approach.

It was this idea that led me to join as one of the first engineers at a small startup in Boulder, CO called LineRate Systems, which was eventually acquired by F5 Networks. The company was founded on a passion for making high-performance, lightweight application delivery (aka load balancing) software that was as fast as the industry-standard hardware. Our realization was that commercial off-the-shelf (COTS) hardware had so much performance that, if leveraged properly, it was possible to offer the same performance at a much lower cost.

But the big idea, the one that ultimately got us noticed by F5, was that if the hardware was freely available (everyone had racks and racks of servers), we could charge our customers for a performance range and let them stamp out the software — as much as they needed — to achieve it. This removed the risk of the transaction from the customer, as they no longer had to pre-plan 3-5 years' worth of capacity. It placed the burden on the provider to deliver an efficient, API-first elastic platform and a pricing model that scaled along the same dimensions as their business needs.

After acquisition we started to use containers and eventually Kubernetes for some of our build and test infrastructure. The use of these technologies led us to realize that they were great for increasing velocity and agility, but were difficult to debug and secure. We had no record of what our test containers did or who they talked to at runtime and we had no idea what data they were accessing. If we had a way to make sense of all of this, life would be so much easier.

This led us to work on some internal projects that experimented with ideas that we all now know as service mesh. We even released a product that was the beginning of this called the Application Services Proxy, which we ultimately end-of-lifed in 2017 when we made the decision to create Aspen Mesh.

In 2018 Aspen Mesh was born as an F5 incubation. It is a culmination of almost 20 years of solving network and security problems for some of the world's largest enterprise customers and ensuring that the form-factor, consumption and pricing models are flexible and grow along with the businesses that use them. It is an acknowledgement that disruption is happening everywhere and that an organization's agility and ability to respond to disruption is its number one business asset. Companies are realizing this agility by redefining how they deliver value to their customers as quickly as possible using technologies like cloud, containers and Kubernetes.

We know that for enterprises, agility with stability is the number one competitive advantage. Through years of experience working on enterprise products, we know that companies who can meet their evolving customer needs — while staying out of the news for downtime and security breaches — will be the winners of tomorrow. Aspen Mesh's Enterprise Service Mesh enables enterprises to rapidly deliver value to their customers in a performant, secure and compliant way.

2. Describe how your technical and business experience combined to create an enterprise startup and inform your understanding of how best to run a “modern” software company?

Throughout my career I have been part of waterfall to agile transformations, worked on products that enabled business agility and now run a team that requires that same flexibility and business agility. We need to be focused on getting product to market that shows value to our customers as quickly as possible. We rely on automation to ensure that we are focusing our human capital on the most important tasks. We rely on data to make our decisions and ensure that the data we have is trustworthy and secure.

The great thing is that we get to be the ones doing the disrupting, and not the ones getting disrupted. What this means is we get to move fast and don’t have the burden of a large enterprise decision-making process. We can be agile and make mistakes, and we are actually expected to make mistakes. We are told "no" more than we are told "yes." But, learning from those failures and making course corrections along the way is key to our success.

Over the years I have come to embrace the power of open source and what it can do to accelerate projects and the impacts (both positive and negative) it can have on your company. I believe that in the future all applications will be born from open technologies. Companies that acknowledge and embrace this will be the most successful in the new cloud-native and open source world. How you choose to do that depends on your business and business model. You can be a consumer of OSS in your SaaS platform, an open-core product, glue multiple projects together to create a product or provide support and services; but if you are not including open source in your modern software company, you will not be successful.

Over the past 10 years we have seen, and continue to see, consumption models across all verticals rapidly evolve from perpetual, non-recurring sales models with annual maintenance contracts, to subscription- or consumption-based models, to fully managed SaaS-based offerings. I recently read an article on subscription-based banking. This is driven by the desire to shift the risk to the producer instead of the consumer. It is a realization by companies that capacity planning for 3-5 years is impossible, and that laying out that cash is a huge risk to the business they are no longer willing to take. It is up to technology producers to provide enough value to attract customers and then continue providing value to retain them year over year.

Considering how you are going to offer your product in a way that scales with your customers' value matrix and growth plans is critical. This applies to pricing as well as product functionality and performance.

Finally, I would be negligent if I didn’t mention data as a paramount consideration when running a modern software company. Insights derived from that data need to be at the center of everything you do. This goes not only for your product, but also your internal visibility and decision making processes. 

On the product side, when dealing with large enterprises it is critical to understand what your customers are willing to give you and how much value they need to realize in return. An enterprise's first answer will often be “no” when you tell them you need to access their data to run your product, but that just means you haven’t shown them enough value to say "yes." You need to consider what data you need, how much you need of it, where it will be stored and how you are protecting it.

On the internal side you need to measure everything. The biggest challenge I have found with an early-stage, small team is taking the time to enable these measurements. It is easy for measurement to drop off the list when you are trying to get new features out the door and don't yet know what you're going to do with the data. Resist that urge and force your teams to think about how they can do both, and if necessary take the time to slow down and put it in. Sometimes being thoughtful early on can help you go fast later, and adding hooks to gather and analyze data is one of those times.

Operating a successful modern software company requires you to embrace all the clichés about wearing multiple hats and failing fast. It's also critical to be agile, embrace open source, create a consumption-based offering, and rely on data, data and more data.

3. What characteristics define a “modern” enterprise, and how does Aspen Mesh contribute to making it a reality?

The modern enterprise craves agility and considers it to be its number one business advantage. This agility is what allows the enterprise to deliver value to customers as quickly as possible, and it is often derived from a greater reliance on technology to enable rapid speed to market. Enterprises are constantly defending against disruption from non-traditional startup companies with seemingly unlimited venture funding and no expectation of profitability, all the while being required to compete and deliver value while maintaining the revenue and profitability goals that their shareholders have grown to expect over years of sustained growth.

In order to remain competitive, enterprises are embracing new business models and looking for new ways to engage their customers through new digital channels. They are relying more on data and analytics to make business decisions and to make their teams and operations more efficient. Modern enterprises are embracing automation to perform mundane repetitive tasks and are turning over their workforce to gain the technical talent that allows them to compete with the smaller upstart disruptors in their space.

But agility without stability can be detrimental to an enterprise. As witnessed by many recent reports, enterprises can struggle with challenges around data and data security, perimeter breaches and downtime. It's easy to get caught up in the promise of the latest new technology, but moving fast and embracing new technology requires careful consideration of how it integrates into your organization, its security posture and how it scales with your business. Finding a trusted partner to accompany you on your transformation journey is key to long-term success.

Aspen Mesh is that technology partner when it comes to delivering next-generation application architectures based on containers and Kubernetes. We understand the power and promise of agility and scalability that these technologies offer, but we also know that they introduce a new set of challenges for enterprises. These challenges include securing communication between services, observing and controlling service behavior, diagnosing problems, and managing the policy associated with services across large, distributed organizations.

Aspen Mesh provides a fully supported service mesh that is focused on enterprise use cases that include:

  • An advanced policy framework that allows users to describe business goals that are enforced in the application’s runtime environment
  • Role based policy management that enables organizations to create and apply policies according to their needs
  • A catalog of policies based on industry and security best practices that are created and tested by experts
  • Data analytics-based insights for enhanced troubleshooting and debugging
  • Predictive analytics to help teams detect and mitigate problems before they happen
  • Streamlined application deployment packages that provide a uniform approach to authentication and authorization, secure communications, and ingress and egress control
  • DevOps tools and workflow integration
  • A simplified user experience with improved organization and streamlined navigation to enable users to quickly find and mitigate failures and security issues
  • A consistent view of applications across multiple clouds to allow visibility from a global perspective to a hyper-local level
  • Graph visualizations of application relationships that enable teams to collaborate seamlessly on focused subsets of their infrastructure
  • Tabular representations surfacing details to find and remediate issues across multiple clusters running dozens or hundreds of services
  • A reduced-risk scalable consumption model that allows customers to pay as they grow

Thanks for reading! We hope that helps shed some light on what goes on behind the scenes at Aspen Mesh. And if you liked this post, feel free to subscribe to our blog in order to get updates when new articles are released.


How Service Mesh Enables DevOps

I spend most of my day talking to large companies about how they are transforming their businesses to compete in an increasingly disruptive environment. This isn't anything new; anyone who has read Clayton Christensen's Innovator's Dilemma understands this. What's most interesting to me is how companies are addressing disruption. Of course, they are creating new products to remain competitive with the disruptors, but they are also taking a page out of their smaller, more nimble competitors' playbook and focusing on being more efficient.

Companies are transforming internal organizations and product architectures along a new axis of performance. They are finding more value in iteration, efficiency and incremental scaling, which is pushing them to adopt DevOps methodologies. This focus on time-to-market is driving some of the most cutting-edge infrastructure technology we have ever seen. Technologies like containers and Kubernetes, along with a focus on stable, consistent and open APIs, allow small teams to make amazing progress and move at the speed they require. These technologies have reduced friction and time-to-market, and the result is some of the fastest adoption of a new technology the industry has ever seen.

The adoption of these technologies isn't perfect, and as companies deploy them at scale they realize that they have inadvertently increased complexity and decentralized ownership and control. In many cases, no one can understand the entire system, yet everyone is expected to be an expert in compliance and business needs. When everyone is responsible, no one is accountable.

A service mesh enables DevOps by helping you to manage this complexity. It provides autonomy and freedom for development teams while simultaneously providing a place for teams of experts to enforce company standards for policy and security. It does this by providing a layer between your teams’ applications and the platform they are running on that allows platform operators a place to insert network services, enforce policy and collect telemetry and tracing data.
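To make that layer concrete, here is a minimal conceptual sketch in Python of a sidecar-style proxy that sits between a service and the network, enforcing a platform-operator-defined allow-list policy and recording telemetry without any change to the application itself. The service names and policy format here are invented for illustration; a real mesh such as Istio does this with Envoy sidecar proxies and platform-level traffic interception, not application code.

```python
import time
from collections import defaultdict

# Hypothetical allow-list policy owned by the platform team:
# which source services may call which targets.
POLICY = {
    "frontend": {"catalog", "cart"},
    "cart": {"payments"},
}

# Telemetry collected by the mesh layer, invisible to the applications.
metrics = defaultdict(list)

def upstream_call(target, request):
    """Stand-in for the real service behind the sidecar."""
    return {"service": target, "echo": request}

def sidecar(source, target, request):
    """Intercept a service-to-service call: enforce policy, record latency."""
    if target not in POLICY.get(source, set()):
        metrics[(source, target)].append(("denied", 0.0))
        return {"status": 403, "reason": "policy denied"}
    start = time.perf_counter()
    body = upstream_call(target, request)
    latency = time.perf_counter() - start
    metrics[(source, target)].append(("ok", latency))
    return {"status": 200, "body": body}

if __name__ == "__main__":
    print(sidecar("frontend", "catalog", {"sku": 42}))   # allowed by policy
    print(sidecar("frontend", "payments", {"amt": 5}))   # denied by policy
```

The point of the sketch is the separation of concerns: the application (`upstream_call`) never sees the policy or the metrics, so development teams keep their autonomy while the platform team owns enforcement and observability.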

This empowers your development teams to make choices based on the problem they are solving rather than being concerned with the underlying infrastructure. Dev teams now have the freedom to deploy code without the fear of violating compliance or regulatory guidelines. Secure communication is handled outside of the application reducing complexity and risk. A service mesh also provides tools that developers can use to deploy new code and debug or troubleshoot problems when they come up.

For the platform operator, whose primary objective is to provide a stable, secure and scalable service to run applications, a service mesh provides uniformity through a standardization of visibility and tracing. Policy and authentication between services can be introduced outside of the application at runtime ensuring that applications are adhering to any regulatory requirements the business may have. Deploying Aspen Mesh provides a robust experiments workflow to enable development teams to test new services using real production traffic. Our platform also provides tools that reduce mean-time-to-detection (MTTD) and mean-time-to-resolution (MTTR) with advanced analytics that are part of our SaaS portal.

DevOps represents two teams, Development and Operations, coming together to deliver better products more rapidly. A service mesh is the glue that helps unite these teams, providing one place in the stack where you can manage microservices at runtime without changes to the application or cluster.

The result is a platform that empowers application developers to focus on their code, and allows operators to more easily provide developers with a resilient, scalable and secure environment.

The Path to Service Mesh

When we talk to people about service mesh, there are a few questions we’re always asked. These questions range from straightforward questions about the history of our project, to deep technical questions on why we made certain decisions for our product and architecture.

To answer those questions we’ll bring you a three-part blog series on our Aspen Mesh journey and why we chose to build on top of Istio.

To begin, I’ll focus on one of the questions I’m most commonly asked.

Why did you decide to focus on service mesh and what was the path that led you there?

LineRate Systems: High-Performance, Software-Only Load Balancing

The journey starts with a small Boulder startup called LineRate Systems and its acquisition by F5 Networks in 2013. Besides having one of the smartest and most talented engineering teams I have ever had the privilege of being part of, LineRate built a lightweight, high-performance, software-only L7 proxy. When I say high performance, I am talking about turning a server you already had in your datacenter five years ago into a fully featured proxy pushing 20+ Gbps and 200,000+ HTTP requests per second.

While the performance was eye-catching and certainly opened doors for our customers, our hypothesis was that customers wanted to pay for capacity, not hardware. That insight would turn out to be LineRate's key value proposition. This simple concept allowed customers to change the way they consumed and deployed load balancers in front of their applications.

To fulfill that need we delivered a product and business model that allowed our customers to replicate the software as many times as needed across COTS hardware, allowing them to get peak performance regardless of how many instances they used. If a customer needed more capacity they simply upgraded their subscription tier and deployed more copies of the product until they reached the bandwidth, request rate or transaction rates the license allowed.

This was attractive, and we had some success there, but soon we had a new insight…

Efficiency Over Performance

It became apparent to us that application architectures were changing and the value curve for our customers was changing along with them. We noticed in conversations with leading-edge teams that they were talking about concepts like efficiency, agility, velocity, footprint and horizontal scale. We also started to hear from innovators in the space about this new technology called Docker, and how it was going to change the way that applications and services were delivered.

The more we talked to these teams, and the more we thought about how we were developing our own internal applications, the more we realized that a shift was happening. Teams were fundamentally changing how they delivered their applications, and the result was that our customers were beginning to care less about raw performance and more about distributed proxies. This shift brought many benefits, including reduced application failure domains, increased deployment flexibility and the ability for applications to store their proxy and network configuration as code alongside the application itself.

At the same time containers and container orchestration systems were just starting to come on the scene, so we went to work on delivering our LineRate product in a container with a new control plane and thinking deeply about how people would be delivering applications using these new technologies in the future.

These early conversations in 2015 drove us to think about what application delivery would look like in the future…

That Idea that Just Won’t Go Away

As we thought more about the future of application delivery, we began to focus on the concept of policy and network services in a cloud-native distributed application world. Even though we had many different priorities and projects to work on, the idea of a changing application landscape, cloud-native applications and DevOps based delivery models remained in the forefront of our minds.

There just has to be a market for something new in this space.

We came up with multiple projects that for various reasons never came to fruition. We lovingly referred to them as v1.0, v1.5, and v2.0. Each of these projects had unique approaches to solving challenges in distributed application architectures (microservices).

So we thought as big as we could. A next-gen ADC architecture: a control plane that’s totally API-driven and separate from the data plane. The data plane comes in any form you can think of: purpose-built hardware, software-on-COTS, or cloud-native components that live right near a microservice (like a service mesh). This infinitely scalable architecture smooths out all tradeoffs and works perfectly for any organization of any size doing any kind of work. Pretty ambitious, huh? We had fallen into the trap of being all things to all users.

Next, we refined our approach in "1.5", and we decided to define a policy language… The key was defining that open-source policy interface and connecting it seamlessly to the datapath pieces that get the work done. In a truly open platform, some of those datapath pieces are open source too. There were a lot of moving parts that didn't all fall into place at once, and in hindsight we should have seen some of them coming… The market wasn't there yet, we didn't have expertise in open source, and we had trouble describing what we were doing and why.

But the idea just kept burning in the back of our minds, and we didn’t give up…

For Version 2.0, we devised a plan that could help F5’s users who were getting started on their container journey. The technology was new and the market was just starting to mature, but we decided that customers would take three steps on their microservice journey:

  1. Experimenting - Testing applications in containers on a laptop, server or cloud instance.
  2. Production Planning - Identifying what technology is needed to start to enable developers to deploy container-based applications in production.
  3. Operating at Scale - Increasing the observability, operability and security of container applications to reduce the mean-time-to-detection (MTTD) and mean-time-to-resolution (MTTR) of outages.

We decided there was nothing we could do for experimenting customers, but for production planning, we could create an open source connector for container orchestration environments and BIG-IP. We called this the BIG-IP Container Connector, and we were able to solve existing F5 customers’ problems and start talking to them about the next step in their journey. The container connector team continues to this day to bridge the gap between ADC-as-you-know-it and fast-changing container orchestration environments.

We also started to work on a new lightweight containerized proxy called the Application Services Proxy, or ASP. Like Linkerd and Envoy, it was designed to help microservices talk to each other efficiently, flexibly and observably. Unlike Linkerd and Envoy, it didn’t have any open source community associated with it. We thought about our open source strategy and what it meant for the ASP.

At the same time, a change was taking place within F5…

Aspen Mesh - An F5 Innovation

As we worked on our go-to-market plans for ASP, F5 changed how it invests in new technologies and nascent markets, creating incubation programs. That change, combined with the explosive growth in the container space, led us to the decision to commit to building a product on top of an existing open source service mesh. We picked Istio because of its attractive declarative policy language, scalable control-plane architecture and other things that we'll cover in more depth as we go.

With a plan in place it was time to pitch our idea for the incubator to the powers that be. Aspen Mesh is the result of that pitch and the end of one journey, and the first step on a new one…

Parts two and three of this series will focus on why we decided to use Istio for our service mesh core and what you can expect to see over the coming months as we build the most fully supported enterprise service mesh on the market.


Introducing Aspen Mesh - The Enterprise Service Mesh

Today we are very excited to introduce Aspen Mesh, an enterprise service mesh built on the open source project Istio. After talking to development and operations teams, it became clear that microservices are great for development velocity, but the complexity and risk in these architectures lies in the service-to-service communication that microservices depend on. We have taken an application-first approach to provide a communication fabric for microservices, called a service mesh. Our supported service mesh gives DevOps teams the flexibility and autonomy they desire while providing the policy, visibility and insights into their microservice environment that operations teams demand for production-grade applications.

What is Aspen Mesh?

It’s a service mesh.

I know what you are thinking… “So what?”

We’ll have plenty more to say about that in the future, but for now think about all the network, security and telemetry services you use for your traditional monolithic applications.

Now think about your plans for microservices. Maybe you plan to have 10, 50, 100 or 1,000s of services running in your Kubernetes cluster. How do you provide all of those services in your new microservice and container environments in an efficient, uniform way?

Do you know who is talking to whom, and whether they are allowed to? Is that communication secure? How do you debug something when it goes down? How do you add tracing or logging without touching all your applications? Do you know what the performance or quality impacts of releasing a new version of one of those services are on its upstream and downstream services?

A service mesh helps answer those questions. As a transparent infrastructure layer inserted between your microservices and the network, a service mesh gives you a single point in the communication path of your applications to insert services and gather telemetry. You can do this without requiring changes to your applications.
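As a rough illustration of "without requiring changes to your applications," here is a small Python sketch of how a mesh layer can propagate a trace ID across service hops and assemble a call graph while the application handlers never mention tracing. The service names and the `x-trace-id` header shape are assumptions for the sake of the example; real meshes typically rely on Envoy-style request ID and tracing headers.

```python
import uuid

spans = []  # trace records gathered by the mesh layer, invisible to the apps

def app_handler(name, request, call):
    # Application code: knows nothing about tracing headers.
    if name == "frontend":
        return {"page": call("backend", {"q": request.get("q")})}
    return {"rows": [request.get("q"), "result"]}

def mesh_call(source, target, payload, headers):
    """Mesh layer: injects or propagates a trace ID and records one span per hop."""
    headers = dict(headers)
    headers.setdefault("x-trace-id", uuid.uuid4().hex)
    spans.append({"trace": headers["x-trace-id"], "from": source, "to": target})
    # The closure handed to the app keeps propagating the same headers downstream.
    return app_handler(target, payload,
                       lambda t, p: mesh_call(target, t, p, headers))

response = mesh_call("ingress", "frontend", {"q": "books"}, {})
# spans now holds two hops (ingress -> frontend, frontend -> backend)
# that share a single trace ID, reconstructed entirely by the mesh layer.
```

Because every hop flows through `mesh_call`, the mesh can answer "who is talking to whom" and stitch hops into a single trace, even though neither application handler was modified to participate.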

What are Aspen Mesh’s Benefits Over Open Source?

We think open source is great! In fact, we think some projects are so awesome that we decided to use them in our product. Aspen Mesh is built on an open core model and our Enterprise Service Mesh is a packaged and supported distribution of Istio and Envoy.

Because having a choice is important, we have taken a unique approach to our product that allows you the most flexibility in how you deploy a service mesh in your environment. Aspen Mesh consists of our hosted SaaS platform for visibility, analytics and policy management and our supported Enterprise Service Mesh distribution.

Aspen Mesh’s Enterprise Service Mesh Distribution can be deployed by customers who require product support and services for their production systems. We version, build, package, test and document our distribution and we fully support our customers throughout their microservices journey. Using our distribution of Istio gives you access to our feature set in both the service mesh as well as our hosted portal, and it is fully supported.

Our Hosted SaaS Platform can be used with the community version of Istio. So if you are passionate about using open source, just exploring the concepts of containers and service mesh, or have already deployed Istio, using the portal alone is an option. As an open source user you get visibility, predictive analytics and policy management as well as a hosted option for logging and tracing infrastructure. Our enterprise customers have access to features and functionality that can only be provided when using our enterprise distribution.

How Do I Get Started with Aspen Mesh?

The concept of a service mesh is brand new. In fact, until 2018 was declared "The Year of the Service Mesh" at KubeCon in December, most people had never heard of one. But we have been working on this concept in different ways for a while now, and we are able to offer early access to the product for interested customers.

We are looking for teams on their container journey who want to solve real problems with their applications. We need partners who are excited to work with us and understand the value of a strong relationship.

Not everyone is cut out for the next big thing, but if you think you are up to the challenge we would love to talk to you and your team.

Join our early access program today.