Virtual Host Routing with Logical DNS Names

Let me describe a common service mesh scenario...

You've deployed your application and it is happily consuming some external resources on the 'net. For example, say that reviews.default.svc.cluster.local is communicating with an external service, and now you need to switch to a different one. You're using a service mesh, right? The light bulb goes on — how about we just redirect all traffic from the old external service to the new one? That certainly will work; add or modify a few resources and voilà, traffic is re-routed with zero downtime!

Only now there's a new problem — your system is looking less like the tidy cluster you started with and more like a bowl of spaghetti!

What if we used a neutral name for the database? How about db.default.svc.cluster.local? We might start with the same mechanism for re-routing traffic: from db.default.svc.cluster.local to the current external service. Then, when we need to make the change above, we just update the configuration to route traffic from db.default.svc.cluster.local to the new external service. Done, and again with zero downtime!

This is Virtual Host Routing to a Logical DNS Name. Virtual Host Routing is traditionally a server-side concept — a server responding to requests for one or more virtual servers. With a service mesh, it's fairly common to also apply this routing to the client side, redirecting traffic destined for one service to another service.

To give you a bit more context, a "logical name" is a placeholder name that is mapped to a physical name when a request is made. An application might be configured to talk to its database at db.default.svc.cluster.local, which is then mapped to one physical database host in one cluster and a different one in another.

Common practice is to use configuration to supply DNS names to an application (say, a DB_HOST environment variable). If that configuration points directly at a physical server, it's harder to redirect the traffic later; point it at a logical name instead.
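As a minimal sketch, a Deployment's container spec might supply the logical name through the environment (the variable and service names here are illustrative):

```yaml
# Fragment of a Deployment container spec (illustrative)
env:
- name: DB_HOST
  value: db.default.svc.cluster.local  # logical name; the mesh maps it to a physical host
```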

Best Practices

What are some best practices for working with external services? Practices like restricting outbound traffic and doing TLS origination at the proxy can have a significant impact. The best practices listed below are not required, but this post is written assuming they are being followed.

Restricting Outbound Traffic

The outbound traffic policy determines whether external services must be declared. A common setting for this policy is ALLOW_ANY: any application running in your cluster can communicate with any external service. We recommend setting the outbound traffic policy to REGISTRY_ONLY, which requires that external services be defined explicitly. For security, the Aspen Mesh distribution of Istio has REGISTRY_ONLY by default.

If you are using another Istio distribution, or if you want to set the outbound traffic policy explicitly, restrict outbound traffic by adding the following to your values file when deploying the istio chart:
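A minimal sketch of the values fragment, assuming the Helm-based Istio install of that era exposes the policy under global.outboundTrafficPolicy.mode (check your chart version for the exact key):

```yaml
global:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY  # external services must be declared via ServiceEntry
```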


TLS Origination

If an application communicates directly over HTTPS to upstream services, the service mesh can't inspect the traffic and has no idea whether requests are failing (to the mesh it's all just encrypted bytes); the proxy is just routing bits. By having the proxy do "TLS origination", the service mesh sees both requests and responses and can even do some intelligent routing based on the content of the requests.

We'll use the rest of this blog to step through how to configure your application to communicate over just HTTP (change https://... configuration to just http://...).

How to Set Up Virtual Host Routing to a Logical DNS Name


A logical DNS name must still be resolvable; otherwise the service mesh won't attempt to route traffic to it. In the yaml below, we define a Kubernetes Service so that the DNS name httpbin.default.svc.cluster.local resolves and we can route traffic to it.

apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  ports:
  - port: 443
    name: https
  - port: 80
    name: http


A service entry declares that services running in our cluster need to communicate with a host on the outside Internet. The actual host (physical name) is listed; we'll assume httpbin.org as the external service for this example. Note that because we have the proxy doing TLS origination (just plain HTTP between the application and the proxy), port 443 lists a protocol of HTTP (instead of HTTPS).

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.org # assumed external host for this example
  ports:
  - number: 443
    name: http-port-for-tls-origination
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL


A virtual service defines a set of rules to apply when traffic is routed to a specific host. In this example, when traffic is routed to the /foo endpoint of httpbin.default.svc.cluster.local, the following rules are applied:

  1. Rewrite the URI from /foo to /get
  2. Rewrite the Host header from httpbin.default.svc.cluster.local to the physical host (httpbin.org in our example)
  3. Re-route the traffic to the external service

Note that just re-routing the traffic is not sufficient for the server to handle our requests: servers use the Host header to decide how to process a request, which is why we rewrite it as well.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.default.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /foo
    rewrite:
      uri: /get
      authority: httpbin.org # assumed external host for this example
    route:
    - destination:
        host: httpbin.org
        port:
          number: 443


A destination rule defines policies that are applied to traffic after routing has occurred. In this case we define policies for traffic going to port 443 of the external service. The configuration above routes plain HTTP traffic to port 443; the following destination rule indicates that this traffic should be sent over HTTPS via TLS (the proxy performs TLS origination).

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin.org # assumed external host for this example
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE # initiates HTTPS when accessing the external host

Testing with a simple pod

That's it! You can now deploy a service, configure it to talk to http://httpbin.default.svc.cluster.local/foo, and traffic will get re-routed to the external service. Let's test it out...

1. Create a pod (just for testing; typically you use deployments to create and manage pods):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: pstauffer/curl
    command: ["/bin/sleep", "3650d"]

$ kubectl apply -f pod.yaml

The above pod just sleeps for ten years. Not very interesting by itself, but the image also provides the curl command that we can use for testing.

2. Curl the logical name:

$ kubectl exec -c test-container test-pod -it -- \
    curl -v http://httpbin.default.svc.cluster.local/foo

Here is the expected output (the response body was removed for brevity):

*   Trying
* Connected to httpbin.virtual-host-routing.svc.cluster.local ( port 80 (#0)
> GET /foo HTTP/1.1
> Host: httpbin.virtual-host-routing.svc.cluster.local
> User-Agent: curl/7.60.0
> Accept: */*
< HTTP/1.1 200 OK
< date: Mon, 23 Sep 2019 22:04:08 GMT
< content-type: application/json
< content-length: 916
< x-amzn-requestid: 1743ed99-df5b-41c2-aa46-9662e10be674
< cache-control: public, max-age=86400
< x-envoy-upstream-service-time: 219
< server: envoy
* Connection #0 to host httpbin.virtual-host-routing.svc.cluster.local left intact

And with that, you should be set!

Virtual Host Routing to a Logical DNS Name can be a useful tool, allowing a server to communicate with external services without needing to specify the physical DNS name of the external service. And a service mesh makes it easy, enhancing your capabilities and keeping things rational (no offense to spaghetti lovers!).

If you enjoyed learning about (and trying out!) this topic, subscribe to our blog to get updates when new articles are posted.

Inline yaml Editing with yq

So you're working hard at building a solid Kubernetes cluster — maybe you're using kops to create a new instance group, and BAM, you are presented with an editor session to edit the details of that shiny new instance group. No biggie; you just need to add a simple little detailedInstanceMonitoring: true to the spec and you are good to go.

Okay, now you need to do this several times a day to test the performance of the latest build, and this is just one of several steps to get the cluster up and running. You want to automate building that cluster as much as possible, but every time you get to the step that creates that instance group, BAM, there it is again: your favorite editor, and you have to add that same line every time.

Standard practice is to use cluster templating but there are times when you need something more lightweight. Enter yq.

yq is great for digging through yaml files, but it also has an in-place merge function that can modify a file directly, just like any editor. And kops, along with several other command-line tools, honors the EDITOR environment variable, so you can automate your yaml editing along with the rest of your cluster handiwork.

Making it work

The first roadblock is that while you can pass command line options via the EDITOR environment variable, the file being edited in-place must be the last option (it is appended by kops when it invokes the editor). yq, on the other hand, wants the file to be edited first, followed by a merge file with instructions on how to edit it (more on that below). To get around this, I use a little bash script that invokes yq with the last two command line options reordered (I'll call the file yq-merge-editor.sh; the name is arbitrary):

#!/usr/bin/env bash

if [[ $# != 2 ]]; then
    echo "Usage: $0 <merge file (supplied by script)> <file being edited (supplied by invoker of EDITOR)>"
    exit 1
fi

yq merge --inplace --overwrite "$2" "$1"

In the above script, the merge command tells yq we want to merge yaml files, and --inplace says to edit the first file in place. The --overwrite option instructs yq to overwrite existing sections of the file if they are defined in the merge file. $2 is the file to be edited and $1 is the merge file (the opposite of the order in which the script receives them). Other useful options are documented in the yq merge documentation.

Example 1: Turning on detailed instance monitoring

The next step is to create a merge file containing the edit you want to perform. In this example, we will turn on detailed instance monitoring, a useful way to get more metrics from your nodes. Here's the merge file (we will call it ig-monitoring.yaml):

spec:
  detailedInstanceMonitoring: true

To put it all together, you can invoke kops with a custom editor command:

EDITOR="./yq-merge-editor.sh ./ig-monitoring.yaml" kops edit instancegroups nodes

That's it! kops creates a temporary file and invokes your editor script, which invokes yq. yq edits the temporary file in place, and kops takes the edited output and moves on.

Example 2: Temporarily add nodes

Say you want to temporarily add capacity to your cluster while performing some maintenance. This is a temporary change, so there's no need to update your cluster's configuration permanently. The following merge file will update the min and max node counts in an instance group:

spec:
  maxSize: 25
  minSize: 25

Then invoke the same script from above followed by a kops update:

EDITOR="./yq-merge-editor.sh ig-nodes-25.yaml" kops edit instancegroups nodes
kops update cluster $NAME --yes

These tips should make it easier to build lots of happy clusters!