What is the benefit of being an Early Access User?

Request Early Access to the Aspen App Intelligence Platform to use 360° Performance Insights for cloud-native applications. Interact directly with our product and engineering teams to shape the product and ensure it meets your needs. As we roll out new capabilities, you will be the first to try them and see how they work in your environment.
Talk to our Product team with recommendations or questions at any time; we enjoy collaborating with teams!

What is the Aspen Mesh product?

Our cloud-based Aspen App Intelligence Platform combines the data and tools already available from your Kubernetes and Istio environments with our adaptive AI engine to ensure your applications are available and meet your users’ performance expectations. The Platform learns your applications’ behavior and optimizes that behavior to ensure you are consistently meeting your organization’s SLOs. We automatically detect issues and recommend actions and remediations before they become a problem – with zero upfront configuration.

Using our platform, you can optimize applications across multiple clusters and predict changes in demand and risk, so you can add capacity or make changes before they become critical or cause outages. Eliminate the risk of costly production failures as your development teams roll out changes by conducting continuous deployments that use our AI-based behavior comparisons to detect whether a change has pushed the app’s behavior outside of what is normal. Our platform provides unmatched, easy-to-understand application performance insights for your entire organization — from developers to the C-suite.

Why would I use the Aspen Mesh product today?

You know Kubernetes and service mesh are powerful tools in your toolbox, but to get the most out of them, you need something more. Your team knows the challenges of running a distributed multi-cluster microservice environment at scale. The complexity is too much for one person or a team to handle on their own. You recognize the power of the data that is coming from these systems and know there must be a better way to tap into it. Applying AI and ML to this telemetry data is key to making better decisions. 

When you become an early access user, you can help define a solution that works for you. You will have unlimited access to our product owners and engineering team, and your input will help shape our product. You will be the first to take advantage of our platform’s ML modeling capability and will see how the recommendations from our adaptive engine improve user experience and application performance.

Who uses Aspen Mesh?

The Aspen App Intelligence Platform is used by developers, DevOps and Platform teams who deliver their applications using Kubernetes and service mesh technologies at scale. These teams are accountable for their applications’ performance, stability and availability, and they are using Aspen Mesh to provide insights and recommendations into their complex microservice environments to ensure they are delivering the experience their customers expect and meeting their obligations to the business.

How do I get started with Aspen Mesh?

Getting started is easy. Request Early Access now and we will send you an email to get started.

What does Aspen Mesh cost?

We are looking for early users of our platform and thought leaders who will help shape the future of our product and solutions through feedback and collaboration. In exchange for providing feedback to our product and engineering teams, you can use the platform unrestricted for 12 months.

How does Aspen Mesh get data from my cluster?

Aspen Mesh uses a lightweight agent in each cluster to collect telemetry data and send it to the Aspen App Intelligence Platform. This agent collects telemetry data from Envoy proxies, the Istio control plane, and Kubernetes. It then pre-processes this data before sending it to the Management Console. Although no sensitive data is collected, you can configure the pre-processor to redact any data you choose before it leaves your cluster.

All data collection and analysis are done using OpenTelemetry APIs and SDKs. This ensures that your telemetry data flows follow open protocols, leaving you free to leverage tools of your choice. 
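As a rough illustration of the kind of attribute redaction a telemetry pre-processor can apply before data leaves the cluster, here is a minimal sketch. The keys, patterns, and function here are hypothetical examples, not the Aspen Mesh agent's actual configuration format:

```python
import re

# Hypothetical illustration: mask configured attribute keys and value
# patterns in telemetry records before they leave the cluster.
REDACT_KEYS = {"http.request.header.authorization", "user.email"}
REDACT_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g. SSN-like strings

def redact(attributes: dict) -> dict:
    """Return a copy of span/metric attributes with sensitive values masked."""
    cleaned = {}
    for key, value in attributes.items():
        if key in REDACT_KEYS:
            cleaned[key] = "[REDACTED]"
            continue
        if isinstance(value, str):
            for pattern in REDACT_PATTERNS:
                value = pattern.sub("[REDACTED]", value)
        cleaned[key] = value
    return cleaned

span_attrs = {
    "http.method": "GET",
    "http.request.header.authorization": "Bearer abc123",
    "note": "customer 123-45-6789 called support",
}
print(redact(span_attrs))
```

A real deployment would express the same idea declaratively, for example as a processor in an OpenTelemetry Collector pipeline rather than inline Python.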

I already have Datadog and Grafana for data monitoring, what value does Aspen Mesh give me?

Datadog, Grafana, and other Application Performance Management (APM) tools are built with infrastructure monitoring in mind. Too often, developers get data-dense dashboards with little meaningful information for reliably operating an application.

The Aspen App Intelligence Platform is different. We understand that developers have SLOs and error budgets to meet but are rarely given tools to help them manage them. Aspen Mesh gives developers the meaningful insights they need to operate their applications, allowing them to focus more on delivering business value and less on operational tasks.
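To make the error-budget arithmetic behind SLOs concrete, here is a generic sketch; the function and numbers are illustrative, not Aspen Mesh's implementation or API:

```python
# Illustrative error-budget arithmetic: with a 99.9% availability SLO,
# 0.1% of requests in a window are allowed to fail.

def error_budget_remaining(slo_target: float, total: int, failed: int) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    budget = (1.0 - slo_target) * total  # failures allowed this window
    if budget == 0:
        return 0.0
    return (budget - failed) / budget

# 1,000,000 requests at a 99.9% SLO allow 1,000 failures.
remaining = error_budget_remaining(0.999, 1_000_000, 250)
print(f"{remaining:.0%} of the error budget remains")  # 75%
```

Tracking this fraction over time is what lets a team decide whether to keep shipping features or pause to invest in reliability.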

Can Aspen Mesh help me determine when to scale applications?

Autoscaling in Kubernetes doesn’t always work the way people expect. In production environments, deciding when to scale out or in is a complex decision that requires much more insight than CPU or memory utilization. Factors such as application behavior, end-user behavior, and correlated events must also inform scaling decisions.

The Aspen App Intelligence Platform provides the ability to proactively scale your applications based on the rich telemetry data available. Aspen Mesh learns your applications’ behavior over time, as well as dependencies on other workloads and systems, to allow you to reliably and confidently scale your applications to achieve your SLOs. For more information on Predictive Scaling, see this white paper. 
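As a deliberately naive sketch of demand-based replica recommendation, consider forecasting the next interval's request rate from a recent trend. The function name, trend model, and headroom parameter are illustrative assumptions; Aspen Mesh's predictive models are far richer, incorporating learned behavior and cross-workload dependencies:

```python
import math
from statistics import mean

# Hypothetical sketch: forecast near-term demand from recent request rates
# and derive a replica count with some headroom.

def recommend_replicas(recent_rps: list,
                       capacity_per_replica: float,
                       headroom: float = 1.2) -> int:
    """Naive linear-trend forecast of the next interval's request rate."""
    if len(recent_rps) < 2:
        forecast = recent_rps[-1]
    else:
        trend = recent_rps[-1] - mean(recent_rps)
        forecast = max(recent_rps[-1] + trend, 0.0)
    return max(1, math.ceil(forecast * headroom / capacity_per_replica))

# Rising traffic: scale ahead of demand rather than after saturation.
print(recommend_replicas([100, 150, 220, 300], capacity_per_replica=50))
```

The key point the example illustrates is proactivity: the recommendation reacts to the trend in demand, not just the current utilization snapshot.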

What is Aspen Mesh’s relationship with F5?

Aspen Mesh is an incubation within F5. As such, Aspen Mesh is creating disruptive technology in the cloud native arena, leveraging a service mesh to create a new way of operating your applications. Aspen Mesh operates autonomously to ensure successful operations but can still leverage the experience and global resources of F5.

What are the Technical Requirements to Use Aspen Mesh?

Currently, the Aspen App Intelligence Platform uses telemetry data from Kubernetes, the Istio control plane, and Envoy to model and analyze your applications. Aspen Mesh uses OpenTelemetry to collect data and send it to the Aspen App Intelligence Platform. If you already have Prometheus or an OpenTelemetry Collector installed in your clusters, then you are all set. Otherwise, you can install the open-source Aspen Mesh Collector in your clusters to collect data.

In the future, we will expand our telemetry sources to other service meshes, network path components such as load balancers, and even auto-instrumentation of your applications. If you need to add a specific telemetry source, contact us. 

How does Aspen Mesh help my continuous delivery processes?

Continuous Delivery is a powerful practice that greatly improves your delivery cadence and reliability. However, there is still a gap between the meaningful data you need to deploy with confidence and the data current tools provide.

Aspen Mesh accelerates your CD practice by giving your tooling meaningful insights into the behavior of your application while rolling out new versions. This allows you and your business partners to increase the confidence of an application’s readiness. Learn more about our Continuous Delivery solution here. 
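As a simplified sketch of what a behavior comparison during a rollout can look like, the snippet below flags a canary whose latency drifts outside the baseline's normal band. The function, threshold, and sample data are illustrative assumptions, not Aspen Mesh's actual comparison model:

```python
from statistics import mean, pstdev

# Hypothetical sketch of a rollout behavior check: compare a canary's
# latency samples against the learned baseline and flag the canary if its
# mean drifts more than a few standard deviations from normal.

def within_normal(baseline_ms: list, canary_ms: list,
                  tolerance_sigmas: float = 3.0) -> bool:
    """True if the canary's mean latency stays within the baseline's band."""
    mu, sigma = mean(baseline_ms), pstdev(baseline_ms)
    return abs(mean(canary_ms) - mu) <= tolerance_sigmas * max(sigma, 1e-9)

baseline = [102, 98, 105, 101, 99, 103, 97, 100]
good_canary = [104, 100, 99, 102]
bad_canary = [180, 210, 195, 205]

print(within_normal(baseline, good_canary))  # True
print(within_normal(baseline, bad_canary))   # False
```

A CD pipeline could gate promotion on a check like this: proceed while the canary stays within normal, roll back as soon as it does not.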

Does Aspen Mesh work with multiple clusters?

Absolutely! Multi-cluster management is a very common challenge. Some tools exist to help manage the clusters themselves, but managing an application’s lifecycle across many Kubernetes and Istio clusters is tedious and error-prone at best.

The Aspen App Intelligence Platform is built to give developers full insight into their application’s lifecycle. Deploying applications across multiple clusters and regions, where they interact with other workloads and systems, is an essential pattern for cloud-native applications. Aspen Mesh lets you visualize and manage your applications across multiple clusters, see where traffic is flowing, visualize bottlenecks and their potential causes, and much more. Find more information on multi-cluster applications here.

If you want to deep dive into how service mesh can help you more effectively manage microservices, get a complimentary ebook on Getting The Most Out Of Service Mesh.


Glossary

Aspen Mesh: a fully supported enterprise service mesh that adds traffic management and security capabilities to Istio.

Authentication (AuthN): a way to verify the identity of an actor seeking access to protected data. A service mesh can authorize and authenticate requests made from both outside and within the app, sending only validated requests to service instances.

Authorization (AuthZ): a way to verify that an actor is allowed to access the requested protected data. A service mesh can authorize and authenticate requests made from both outside and within the app, sending only validated requests to service instances.

Bimodal IT: according to Gartner, the ability to deliver both traditional IT applications, with a focus on stability and uptime, and newer, more agile but possibly less tested applications through newer methods, such as developer self-provisioning of machines and short development cycles.

Brownfield deployment: in contrast to a greenfield deployment, a brownfield deployment is an upgrade or addition to an existing network that uses some legacy components.

Container: a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Container orchestration framework: as more containers are added to an application’s infrastructure, a separate tool for monitoring and managing the set of containers – a container orchestration framework – becomes essential. Kubernetes is most users’ tool of choice.

Encryption: a service mesh can encrypt and decrypt requests and responses, removing that burden from each of the services. The service mesh can also improve performance by prioritizing the reuse of existing, persistent connections, reducing the need for the computationally expensive creation of new ones.

Greenfield deployment: in networking, a greenfield deployment is the installation and configuration of a network where none existed before.

Istio: an open source service mesh based on Kubernetes that leverages a sidecar proxy architecture to make it easy to connect, secure, control, and observe services.

Kubernetes (K8s): a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. Kubernetes services, support, and tools are widely available. Google open-sourced the Kubernetes project in 2014.

Load balancing: in a service mesh, load balancing works from the bottom up. The list of available instances maintained by the service mesh is stack‑ranked to put the least busy instances – that’s the load balancing part – at the top.
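The "least busy instance first" selection described above can be sketched in a few lines; this is a generic illustration of client-side load balancing, with hypothetical field names, not any particular mesh's implementation:

```python
# Minimal sketch: stack-rank available instances by active requests and
# pick the least busy one (illustrative only).

def pick_instance(instances: list) -> dict:
    """Select the instance currently handling the fewest active requests."""
    return min(instances, key=lambda inst: inst["active_requests"])

pool = [
    {"addr": "10.0.0.1:8080", "active_requests": 7},
    {"addr": "10.0.0.2:8080", "active_requests": 2},
    {"addr": "10.0.0.3:8080", "active_requests": 5},
]
print(pick_instance(pool)["addr"])  # 10.0.0.2:8080
```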

Low-code platforms: tend to be much more synchronized with the technology governance requirements of your wider enterprise IT organization. They offer scalable architectures, the ability to extend platform capabilities with open APIs for reusability, and more flexibility when it comes to cloud and on-premises deployment. In addition, they enable developers to exercise control with application testing, quality and performance tooling while incorporating the high productivity techniques seen in no-code solutions to speed development through visual means.

Microservices applications: a software development technique — a variant of service-oriented architecture (SOA) — that structures an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight.

Monitoring: a method to report the overall health of systems. Monitoring is best limited to key business and systems metrics derived from time-series based instrumentation, known failure modes and blackbox tests.

Observability: a step beyond monitoring, observability provides highly granular insights into the behavior of systems along with rich context, perfect for debugging purposes.

Role Based Access Control (RBAC): a method of restricting network access based on the roles of individual users within an enterprise. RBAC lets employees have access rights only to the information they need to do their jobs and prevents them from accessing information that doesn’t pertain to them.

SDO performance: software delivery and operational performance. Benefits of high SDO performance include increased profitability, productivity, market share, customer satisfaction, and the ability to achieve organizational and mission goals.

Service discovery: when an instance needs to interact with a different service, it needs to find – discover – a healthy, available instance of the other service. The container management framework keeps a list of instances that are ready to receive requests.

Service mesh: a configurable infrastructure layer for a microservices application. It makes communication between service instances flexible, reliable, and fast. The service mesh is usually implemented by providing a proxy instance, called a sidecar, for each service instance. The service mesh exists to provide solutions to the challenges of ensuring reliability (retries, timeouts, mitigating cascading failures), troubleshooting (observability, monitoring, tracing, diagnostics), performance (throughput, latency, load balancing), security (managing secrets, ensuring encryption), dynamic topology (service discovery, custom routing), and other issues commonly encountered when managing microservices in production.

Services vs. service instances: to be precise, what developers create is not a service, but a service definition or template for service instances. The app creates service instances from these, and the instances do the actual work. However, the term service is often used for both the instance definitions and the instances themselves.

Sidecar: a service mesh is usually implemented by providing a proxy instance, called a sidecar, for each service instance. Sidecars handle inter‑service communications, monitoring, security‑related concerns – anything that can be abstracted away from the individual services. This way, developers can handle development, support, and maintenance for the application code in the services; operations can maintain the service mesh and run the app.

Sidecar proxy: a proxy instance that’s dedicated to a specific service instance. It communicates with other sidecar proxies and is managed by the orchestration framework.

Get in Touch

We would like to hear from you.