Expanding Service Mesh Without Envoy

Istio uses the Envoy sidecar proxy to handle traffic within the service mesh.  The following article describes how to use an external proxy, F5 BIG-IP, to integrate with an Istio service mesh without having to use Envoy for the external proxy.  This can provide a method to extend the service mesh to services where it is not possible to deploy an Envoy proxy.

This method could be used to secure a legacy database to only allow authorized connections from a legacy app that is running in Istio, but not allow any other applications to connect.

Securing Legacy Protocols

A common problem that customers face when deploying a service mesh is how to restrict access to an external service to a limited set of services in the mesh.  When all services can run on any node, it is not possible to restrict access by IP address (“good container” comes from the same IP as “malicious container”).

One method of securing the connection is to isolate an egress gateway to a dedicated node and restrict traffic to the database from those nodes.  This is described in Istio’s documentation:

Istio cannot securely enforce that all egress traffic actually flows through the egress gateways. Istio only enables such flow through its sidecar proxies. If attackers bypass the sidecar proxy, they could directly access external services without traversing the egress gateway. Thus, the attackers escape Istio’s control and monitoring. The cluster administrator or the cloud provider must ensure that no traffic leaves the mesh bypassing the egress gateway.

   -- https://istio.io/docs/examples/advanced-gateways/egress-gateway/#additional-security-considerations (2019-03-25)

Another method would be to use mesh expansion to install Envoy onto the VM that is hosting your database. In this scenario the Envoy proxy on the database server would validate requests prior to forwarding them to the database.

The third method, which we will cover here, is to deploy a BIG-IP to act as an egress device that is external to the service mesh.  This is a hybrid of mesh expansion and multicluster mesh.

Mesh Expansion Without Envoy

Under the covers Envoy is using mutual TLS to secure communication between proxies.  To participate in the mesh, the proxy must use certificates that are trusted by Istio; this is how VM mesh expansion and multicluster service mesh are configured with Envoy.  To use an alternate proxy we need to have the ability to use certificates that are trusted by Istio.

Example of Extending Without Envoy

A proof of concept of extending the mesh can be built with the following example.  We will create a TCP-based “echo” service that lives outside of the service mesh.  The goal is to restrict access so that only authorized “good containers” can connect to the “echo” service via the BIG-IP.  The steps involved are:

  1. Retrieve/Create certificates trusted by Istio
  2. Configure external proxy (BIG-IP) to use trusted certificates and only trust Istio certificates
  3. Add policy to external proxy to only allow “good containers” to connect
  4. Register BIG-IP device as a member of the Istio service mesh
  5. Verify that “good container” can connect to “echo” and “bad container” cannot

First we install a set of certificates on the BIG-IP that Envoy will trust and configure the BIG-IP to only allow connections from Istio.  The certs could either be pulled directly from Kubernetes (similar to setting up mesh expansion) or generated by a common CA that is trusted by Istio (similar to multicluster service mesh).
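
The certificates Citadel issues for a service account are stored in a Kubernetes secret named istio.<service-account>, so the first approach can be as simple as extracting that secret with kubectl.  This is a rough sketch following the Istio mesh expansion docs of the time; verify the secret and key names against your Istio version:

$ kubectl -n default get secret istio.default \
    -o jsonpath='{.data.root-cert\.pem}' | base64 --decode > root-cert.pem
$ kubectl -n default get secret istio.default \
    -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode > cert-chain.pem
$ kubectl -n default get secret istio.default \
    -o jsonpath='{.data.key\.pem}' | base64 --decode > key.pem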

Once the certs are retrieved/generated we install them onto the proxy, BIG-IP, and configure the device to only trust client side certificates that are generated by Istio.

To enable a policy that validates the identity of the “good container” we will inspect the X509 Subject Alternative Name field of the client certificate and extract the SPIFFE name that contains the identity of the container.
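
You can see the SPIFFE identity that such a policy would match on by inspecting one of the extracted certificates with openssl.  The exact URI depends on the namespace and service account the certificate was issued for; for the istio.default secret above it would look something like:

$ openssl x509 -in cert-chain.pem -noout -text | grep -A1 'Subject Alternative Name'
            X509v3 Subject Alternative Name:
                URI:spiffe://cluster.local/ns/default/sa/default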

Once the external proxy is configured we can register the device using “istioctl register” (similar to mesh expansion).
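
For example, with the BIG-IP virtual server listening at 10.1.1.7 on port 9000 (the address shown in the logs below), the registration would look something like this; the exact syntax depends on your istioctl version:

$ istioctl register bigip 10.1.1.7 9000
$ kubectl -n default get svc,endpoints bigip   # the registered service now appears in the default namespace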

To verify that our test scenario is working we will have two namespaces, “default” and “trusted”.  Connections from “trusted” will be allowed and connections from “default” will be rejected.  From each namespace we create a pod and run the command “nc bigip.default.svc.cluster.local 9000”.  Looking at our BIG-IP logs we can verify that our policy (iRule) worked:

Mar 25 18:56:39 ip-10-1-1-7 info tmm5[17954]: Rule /Common/log_cert <CLIENTSSL_CLIENTCERT>: allowing: spiffe://cluster.local/ns/trusted/sa/sleep
Mar 25 18:57:00 ip-10-1-1-7 info tmm2[17954]: Rule /Common/log_cert <CLIENTSSL_CLIENTCERT>: rejecting spiffe://cluster.local/ns/default/sa/default

Connection from our “good container”

/ # nc bigip.default.svc.cluster.local 9000
hi
HI

Connection from our “bad container”

# nc bigip.default.svc.cluster.local 9000

In the case of the “bad container” we are unable to connect.  The “nc” (netcat) command is simulating a very basic TCP client.  A more realistic example would be connecting to an external database that contains sensitive data.  In the “good” example we are echoing back the capitalized input (“hi” becomes “HI”).

Just One Example

In this article we looked at expanding a service mesh without Envoy.  This was focused on egress TCP traffic, but it could be expanded to:

  • Using BIG-IP as an SNI proxy instead of NGINX
  • Securing inbound traffic using mTLS and/or JWT tokens
  • Using BIG-IP as an ingress gateway
  • Using ServiceEntry/DestinationRules instead of registered service

If you want to see the process in action, check out this short video walkthrough.

https://youtu.be/83GdmwTvWLI

Let me know in the comments whether you’re interested in any of these use-cases or come up with your own.  Thank you!


Using D3 in React: A Pattern for Using Data Visualization at Scale

Data visualization is an important part of what we do at Aspen Mesh. When you implement a service mesh, it provides a huge trove of data about your services. Everything you need to know about how your services are communicating is available, but separating the signal from the noise is essential. Data visualization is a powerful tool to distill complex data sets into simple, actionable visuals. To build these visualizations, we use React and D3. React is great for managing a large application and organizing code into discrete components to keep you sane. D3 is magical for visuals on large data sets. Unfortunately, much of the usefulness of each library is lost, and bugs are easy to come by, when the two are not conscientiously put together. In the following article, I will detail the pattern that I work with to build straightforward D3-based visualization components that fit easily within a large-scale React application.

Evaluating React + D3 Patterns

The difficulty in putting both libraries together is that each has its own way of manipulating the DOM: React through JSX and its Virtual DOM, and D3 through .append(). The simplest way to combine them is to let them do their own thing in isolation and act as black boxes to each other. I did not like this approach because it felt like jamming a separate D3 application inside of our existing React app. The code was structured differently, it had to be tested differently and it was difficult to use existing React components and event handlers. I kept researching and playing around with the code until I came upon a pattern that addresses those issues. It enables React to track everything in the Virtual DOM and still allows D3 to do what it does best.

Key Aspects:

  • Allow React to handle entering and exiting elements so it can keep track of everything in the Virtual DOM.
  • Code structured and tested the same way as the rest of the React app.
  • Utilize React lifecycle methods and key attribute to emulate D3 data joins.
  • Manipulate and update element attributes through D3 by selecting the React ref object.
  • Use D3 for all the tough math: scales, axes, transitions.

To illustrate the pattern, I will build out a bar graph that accepts an updating data set and transitions between them. The chart has an x axis based on date, and a y axis based on a numerical value. Each data point looks like this:

interface Data {
  id: number;
  date: string;
  value: number;
}

I'll focus on the core components, but to run it and see it all working together, check out the git repo.

SVG Component

The root element to our chart is an SVG. The SVG is rendered through JSX, and subsequent chart elements, such as the axes and the bar elements, are passed in as child components. The SVG is responsible for setting the size and margin and dictating that to its child elements. It also creates scales based on the data and available size and passes the scales down. The SVG component can handle resizing as well as binding D3 panning and zooming functionality. I won't illustrate zooming and panning here, but if you're interested, check out this component. The basic SVG component looks like this.

interface SVGProps {
  svgHeight: number;
  svgWidth: number;
  data: Data[];
}


export default class Svg extends React.Component<SVGProps> {
  render() {
    const { svgHeight, svgWidth, data } = this.props;

    const margin = { top: 20, right: 20, bottom: 30, left: 40 };
    const width = svgWidth - margin.left - margin.right;
    const height = svgHeight - margin.top - margin.bottom;

    const xScale = d3
      .scaleBand()
      .range([0, width])
      .padding(0.1);

    const yScale = d3.scaleLinear().range([height, 0]);

    xScale.domain(data.map(d => d.date));
    yScale.domain([0, d3.max(data, d => d.value) || 0]);

    const axisBottomProps = {
      height,
      scale: xScale
    };
    const axisLeftProps = { scale: yScale };

    const barProps = {
      height,
      width,
      xScale,
      yScale,
      data
    };

    return (
      <svg height={svgHeight} width={svgWidth}>
        <g transform={`translate(${margin.left},${margin.top})`}>
          <AxisBottom {...axisBottomProps} />
          <AxisLeft {...axisLeftProps} />
          <Bars {...barProps} />
        </g>
      </svg>
    );
  }
}

Axes

There are two axis components for the left and bottom axes, and they receive the corresponding D3 scale object as a prop. The React ref object is key to linking up D3 and React. React will render the element, and so keep track of it in the Virtual DOM, and will then pass the ref to D3 so it can manage all the complex attribute math. On componentDidMount, D3 selects the ref object and then calls the corresponding axis function on it, building the axis. On componentDidUpdate, the axis is redrawn with the updated scale after a D3 transition() to give it a smooth animation. The left axis component looks as follows:

interface AxisProps {
  scale: d3.ScaleLinear<any, any>;
}

export default class Axis extends React.Component<AxisProps> {
  ref: React.RefObject<SVGGElement>;

  constructor(props: AxisProps) {
    super(props);
    this.ref = React.createRef();
  }

  componentDidMount() {
    if (this.ref.current) {
      d3.select(this.ref.current).call(d3.axisLeft(this.props.scale));
    }
  }

  componentDidUpdate() {
    if (this.ref.current) {
      d3.select(this.ref.current)
        .transition()
        .call(d3.axisLeft(this.props.scale));
    }
  }

  render() {
    return <g ref={this.ref} />;
  }
}

Rendering Bars with React Data Joins

The Bars element illustrates how to emulate D3's data join functionality through React lifecycle methods and its key attribute. Data joins allow us to map DOM elements to specific data points and to recognize when those data points enter our set, exit, or are updated by a change in the data set. It is a powerful way to visually represent data constancy between changing data sets. It also allows us to update our chart and only redraw elements that change instead of redrawing the entire graph.

Using D3 data joins, with the .enter() or .exit() methods, requires us to append elements through D3 outside of React's Virtual DOM and generally ruins everything. To get around this limitation, we can instead mimic D3 data joins through React's lifecycle methods and its own diffing algorithm. The functions that would be run on .enter() can be executed inside of componentDidMount, updates in componentDidUpdate, and .exit() in componentWillUnmount. Running transitions in componentWillUnmount requires using React Transitions to delay the element from being removed from the DOM until the transition has run.

The necessary element for React to map an element to a data point, in this case a bar to a number and a date, is the component's key attribute. By making the key attribute a unique value for each data point, React can recognize through its diffing algorithm if that element needs to be added in, removed, or just updated based on the data point it represents. The key attribute works exactly the same as the key function that would be passed to D3's .data() function.

In this example, two components are created to render the bars on the chart. The first component, Bars, will map over each data point and render a corresponding Bar component. It binds each data point to the Bar component through the datum prop and assigns a unique key attribute, in this case, the data point's unique id.

interface BarsProps {
  data: Data[];
  height: number;
  width: number;
  xScale: d3.ScaleBand<any>;
  yScale: d3.ScaleLinear<any, any>;
}

class Bars extends React.Component<BarsProps> {
  render() {
    const { data, height, width, xScale, yScale } = this.props;
    const barProps = {
      height,
      width,
      xScale,
      yScale
    };
    const bars = data.map(datum => {
      return <Bar key={datum.id} {...barProps} datum={datum} />;
    });
    return <g className="bars">{bars}</g>;
  }
}

The Bar component renders a <rect /> element and passes the ref object to D3 in its lifecycle methods. The lifecycle methods then operate on the element's attributes in familiar D3 dot notation.

interface BarProps {
  datum: Data;
  height: number;
  width: number;
  xScale: d3.ScaleBand<any>;
  yScale: d3.ScaleLinear<any, any>;
}

class Bar extends React.Component<BarProps> {
  ref: React.RefObject<SVGRectElement>;

  constructor(props: BarProps) {
    super(props);
    this.ref = React.createRef();
  }

  componentDidMount() {
    const { height, datum, yScale, xScale } = this.props;

    d3.select(this.ref.current)
      .attr("x", xScale(datum.date) || 0)
      .attr("y", yScale(datum.value) || 0)
      .attr("fill", "green")
      .attr("height", 0)
      .transition()
      .attr("height", height - yScale(datum.value));
  }

  componentDidUpdate() {
    const { datum, xScale, yScale, height } = this.props;
    d3.select(this.ref.current)
      .attr("fill", "blue")
      .transition()
      .attr("x", xScale(datum.date) || 0)
      .attr("y", yScale(datum.value) || 0)
      .attr("height", height - yScale(datum.value));
  }

  render() {
    const { xScale } = this.props;
    const attributes = {
      width: xScale.bandwidth()
    };
    return <rect data-testid="bar" {...attributes} ref={this.ref} />;
  }
}
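
To see the data join behavior end to end, a hypothetical parent component (not part of the repo above) can render the Svg and periodically swap between two data sets; the key attribute then determines which bars mount, update, or unmount:

import React from "react";
import Svg from "./Svg";

const dataA = [
  { id: 1, date: "9/19/2018", value: 12 },
  { id: 2, date: "11/23/2018", value: 33 }
];
const dataB = [
  { id: 1, date: "9/19/2018", value: 25 },
  { id: 3, date: "12/15/2018", value: 8 }
];

interface ChartState {
  data: typeof dataA;
}

export default class Chart extends React.Component<{}, ChartState> {
  state: ChartState = { data: dataA };
  private timer?: number;

  componentDidMount() {
    // Swap the data set every few seconds to exercise enter, update and exit.
    this.timer = window.setInterval(() => {
      this.setState(({ data }) => ({ data: data === dataA ? dataB : dataA }));
    }, 3000);
  }

  componentWillUnmount() {
    window.clearInterval(this.timer);
  }

  render() {
    return <Svg svgHeight={500} svgWidth={500} data={this.state.data} />;
  }
}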

Testing

By rendering everything through the React Virtual DOM, we can run tests on it with the same setup as we would use to test our other components. This test setup checks that each data point is represented as a bar in the SVG. Two data points are given initially, and then the component is rerendered with only one of the data points. We test that there are two green bars from the initial mount. Then we test that the update is applied correctly and we only have a single blue bar.

import React from "react";
import "jest-dom/extend-expect";
import { render } from "react-testing-library";
import Svg from "../Svg";

it("renders a bar for each data point", () => {
  const svgHeight = 500;
  const svgWidth = 500;
  const data = [
    { id: 1, date: "9/19/2018", value: 1 },
    { id: 2, date: "11/23/2018", value: 33 }
  ];

  const barProps = {
    svgHeight,
    svgWidth,
    data
  };

  const barProps2 = {
    ...barProps,
    data: [data[0]]
  };

  const { rerender, getAllByTestId, getByTestId } = render(
    <Svg {...barProps} />
  );
  expect(getAllByTestId("bar").length).toBe(2);
  expect(getByTestId("bar")).toHaveAttribute("fill", "green");

  rerender(<Svg {...barProps2} />);

  expect(getAllByTestId("bar").length).toBe(1);
  expect(getByTestId("bar")).toHaveAttribute("fill", "blue");
});

I like this pattern a lot. It fits really nicely into the existing production React app and it allows for recognizable code patterns by encouraging building components for each element in a D3 visualization. It's a smaller learning curve for React developers building out large amounts of D3, and we can use existing display components and event systems within D3-managed visualizations. By allowing D3 to manage the attributes, we can still use advanced features like transition animations, panning and zooming.

Creating a UI for a service mesh requires managing a lot of complex data and then representing that data in intuitive ways. By combining React and D3 judiciously, we can allow React to do what it does best and manage large application state and then let D3 shine by creating excellent visualizations.

If you want to check out what the final product looks like, check out the Aspen Mesh beta. It's free and easy to sign up for.


Why Service Meshes, Orchestrators Are Do or Die for Cloud Native Deployments

The self-contained, ephemeral nature of microservices comes with some serious upside, but keeping track of every single one is a challenge, especially when trying to figure out how the rest are affected when a single microservice goes down. The end result is that if you’re operating or developing in a microservices architecture, there’s a good chance part of your days are spent wondering what the hell your services are up to.

With the adoption of microservices, problems also emerge due to the sheer number of services that exist in large systems. Problems like security, load balancing, monitoring and rate limiting that had to be solved once for a monolith, now have to be handled separately for each service.

The good news is that engineers love a good challenge. And almost as quickly as they are creating new problems with microservices, they are addressing those problems with emerging microservices tools and technology patterns. Maybe the emergence of microservices is just a smart play by engineers to ensure job security.

Today’s cloud native darling, Kubernetes, eases many of the challenges that come with microservices. Auto-scheduling, horizontal scaling and service discovery solve the majority of build-and-deploy problems you’ll encounter with microservices.

What Kubernetes leaves unsolved is a few key containerized application runtime issues. That’s where a service mesh steps in. Let’s take a look at what Kubernetes provides, and how Istio adds to Kubernetes to solve the microservices runtime issues.

Kubernetes Solves Build-and-Deploy Challenges

Kubernetes supports a microservice architecture by enabling developers to abstract away the functionality of a set of pods, and expose services to other developers through a well-defined API. Kubernetes enables L4 load balancing, but it doesn’t help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting and circuit breaking.
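
To make the difference concrete, traffic splitting in Istio is a single resource. A minimal sketch (the service name and subsets here are made up for illustration, and the subsets would be defined in a companion DestinationRule) that sends 90 percent of requests to v1 and 10 percent to v2 looks like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10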

Service Mesh Addresses Challenges of Managing Traffic at Runtime

Service mesh helps address many of the challenges that arise when your application is being consumed by the end user. Being able to monitor what services are communicating with each other, if those communications are secure and being able to control the service-to-service communication in your clusters are key to ensuring applications are running securely and resiliently.

Istio also provides a consistent view across a microservices architecture by generating uniform metrics throughout. It removes the need to reconcile different types of metrics emitted by various runtime agents, or add arbitrary agents to gather metrics for legacy un-instrumented apps. It adds a level of observability across your polyglot services and clusters that is unachievable at such a fine-grained level with any other tool.

Istio also adds a much deeper level of security. While Kubernetes only provides basic secret distribution and control-plane certificate management, Istio provides mTLS capabilities so you can encrypt on the wire traffic to ensure your service-to-service communications are secure.

A Match Made in Heaven

Pairing Kubernetes with a service mesh like Istio gives you the best of both worlds, and since Istio was made to run on Kubernetes, the two work together seamlessly. You can use Kubernetes to manage all of your build and deploy needs while Istio takes care of the important runtime issues.

Kubernetes has matured to a point that most enterprises are using it for container orchestration. Currently, there are 74 CNCF-certified service providers — which is a testament to the fact that there is a large and growing market. I see Istio as an extension of Kubernetes and a next step to solving more challenges in what feels like a single package.

Already, Istio is quickly maturing and is starting to see more adoption in the enterprise. It’s likely that in 2019 we will see Istio emerge as the service mesh standard for enterprises in much the same way Kubernetes has emerged as the standard for container orchestration.


Running Stateful Apps with Service Mesh: Kubernetes Cassandra with Istio mTLS Enabled

Cassandra is a popular, heavy-load, highly performant, distributed NoSQL database.  It is fully integrated into many mainstay cloud and cloud-native architectures. At companies such as Netflix and Spotify, Cassandra clusters provide continuous availability, fault tolerance, resiliency and scalability.

Critical and sensitive data is sent to and from a Cassandra database.  When deployed in a Kubernetes environment, ensuring the data is secure and encrypted is a must.  Understanding data patterns and performance latencies across nodes becomes essential, as your Cassandra environment spans multiple datacenters and cloud vendors.

A service mesh provides service visibility, distributed tracing, and mTLS encryption.  

While it’s true Cassandra provides its own TLS encryption, one of the compelling features of Istio is the ability to uniformly administer mTLS for all of your services.  With a service mesh, you can set up an easy and consistent policy where Istio automatically manages the certificate rotation. Pulling Cassandra into a service mesh pairs the capabilities of the two technologies in a way that makes running stateful services much easier.

In this blog, I’ll cover the steps necessary to configure Istio with mTLS enabled in a Kubernetes Cassandra environment.  We’ve collected some information from the Istio community, done some testing ourselves and pieced together a workable solution.  One of the benefits you get with Aspen Mesh is our Istio expertise from running Istio in production for the past 18 months.  We are tightly engaged with the Istio community and continually testing and working out the kinks of upstream Istio. We’re here to help you with your service mesh path to production!

Let’s consider how Cassandra operates.  To achieve continuous availability, Cassandra uses a “ring” communication approach, meaning each node communicates continually with the other existing nodes. For node consensus, each Cassandra node sends metadata to several other nodes through a protocol called Gossip.  The receiving nodes then “gossip” to all the additional nodes. This Gossip protocol is similar to a TCP three-way handshake, and all of the metadata, like heartbeat state, node status, location, etc., is messaged across nodes via IP address:port.

In a Kubernetes deployment, Cassandra nodes are deployed as StatefulSets to ensure the allocated number of Cassandra nodes are available at all times. Persistent volumes are associated with the Cassandra StatefulSets, and a headless service is created to ensure a stable network ID.  This allows Kubernetes to restart a pod on another node and transfer its state seamlessly to the new node.

Now, here’s where it gets tricky.  When implementing an Istio service mesh with mTLS enabled, the Envoy sidecar intercepts all of the traffic from the Cassandra nodes, verifies where it’s coming from, decrypts and sends the payload to the Cassandra pod through an internal loopback address.   The Cassandra nodes are all listening on their Pod IPs for gossip. However, Envoy is forwarding only to 127.0.0.1, where Cassandra isn't listening. Let’s walk through how to solve this issue.

Setting up the Mesh:

We used the cassandra:v13 image from the Google repo for our Kubernetes Cassandra environment. There are a few things you’ll need to ensure are included in the Cassandra manifest at the time of deployment.  The Cassandra Service needs to be a headless service (clusterIP: None), and it has to expose the additional ports/port-names that the Cassandra nodes use to communicate:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  namespace: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - name: tcp-client
    port: 9042
  - port: 7000
    name: tcp-intra-node
  - port: 7001
    name: tcp-tls-intra-node
  - port: 7199
    name: tcp-jmx
  selector:
    app: cassandra

The next step is to tell each Cassandra node to listen to the Envoy loopback address.  

This image, by default, sets Cassandra’s listener to the Kubernetes Pod IP.  The listener address will need to be set to the localhost loopback address. This allows the Envoy sidecar to pass communication through to the Cassandra nodes.

To enable this, you will need to change Cassandra’s config file, cassandra.yaml.

We did this by adding a substitution to our Kubernetes Cassandra manifest based on the Istio bug:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: cassandra
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        command: [ "/usr/bin/dumb-init", "/bin/bash", "-c", "sed -i 's/^CASSANDRA_LISTEN_ADDRESS=.*/CASSANDRA_LISTEN_ADDRESS=\"127.0.0.1\"/' /run.sh && /run.sh" ]
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042

This simple change uses sed to patch the cassandra startup script to listen on localhost.  

If you're not using the google-samples/cassandra container you should modify your Cassandra config or container to set the listen_address to 127.0.0.1.  For some containers, this may already be the default.
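
For a stock cassandra.yaml the relevant settings would look roughly like the excerpt below. Note that broadcast_address is an assumption on my part: peers still need to be told the Pod IP for gossip even though Cassandra itself binds to loopback, so verify how your image's entrypoint handles it.

# cassandra.yaml (excerpt)
listen_address: 127.0.0.1         # bind locally so the Envoy sidecar can hand traffic to Cassandra
broadcast_address: 100.96.1.225   # example Pod IP, advertised to peers for gossip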

You'll need to remove any ServiceEntry or VirtualService resources associated with the Cassandra deployment, as no additional routing entries or rules are necessary.  Nothing external needs to be configured for this communication: Cassandra is now inside the mesh, and traffic will simply pass through to each node.

Since the Cassandra Service is configured as a headless service (i.e. clusterIP: None), a DestinationRule does not need to be added.  When there is no clusterIP assigned, Istio defines the load balancing mode as PASSTHROUGH by default.
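
For reference only (you do not need to create this), the implicit behavior is equivalent to a DestinationRule that sets the PASSTHROUGH load balancing mode, which forwards each connection to its original destination address, in this case the individual Cassandra Pod IPs:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: cassandra
  namespace: cassandra
spec:
  host: cassandra.cassandra.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: PASSTHROUGH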

If you are using Aspen Mesh, the global meshpolicy has mTLS enabled by default, so no changes are necessary.

$ kubectl edit meshpolicy default -o yaml
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
.
. #edited out
.
spec:
  peers:
  - mtls: {}

Finally, create a Cassandra namespace, enable automatic sidecar injection and deploy Cassandra.

$ kubectl create namespace cassandra
$ kubectl label namespace cassandra istio-injection=enabled
$ kubectl -n cassandra apply -f <Cassandra-manifest>.yaml

Here is the output that shows the Cassandra nodes running with Istio sidecars.

$ kubectl get pods -n cassandra                                                                                   
NAME                     READY     STATUS    RESTARTS   AGE
cassandra-0              2/2       Running   0          22m
cassandra-1              2/2       Running   0          21m
cassandra-2              2/2       Running   0          20m
cqlsh-5d648594cb-86rq9   2/2       Running   0          2h

Here is the output validating mTLS is enabled.

$ istioctl authn tls-check cassandra.cassandra.svc.cluster.local

 
HOST:PORT           STATUS     SERVER     CLIENT     AUTHN POLICY     DESTINATION RULE
cassandra...:7000       OK       mTLS       mTLS         default/ default/istio-system

Here is the output validating the Cassandra nodes are communicating with each other and able to establish load-balancing policies.

$ kubectl exec -it -n cassandra cassandra-0 -c cassandra -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load     Tokens  Owns (effective)  Host ID            Rack
UN  100.96.1.225  129.92 KiB  32   71.8%       f65e8c93-85d7-4b8b-ae82-66f26b36d5fd Rack1-K8Demo
UN  100.96.3.51   157.68 KiB  32   55.4%       57679164-f95f-45f2-a0d6-856c62874620  Rack1-K8Demo
UN  100.96.4.59   142.07 KiB  32   72.8%       cc4d56c7-9931-4a9b-8d6a-d7db8c4ea67b  Rack1-K8Demo

If this is a solution that can make things easier in your environment, sign up for the free Aspen Mesh Beta.  It will guide you through an automated Istio installation, then you can install Cassandra using the manifest covered in this blog, which can be found here.