Let me describe a common service mesh scenario…
You’ve deployed your application and it is happily consuming some external resources on the ‘net. For example, say that reviews.default.svc.cluster.local is communicating with the external service redis-12.eu-n-3.example.com. But you need to switch to a new external service, redis-db-4.eu-n-1.example.com. You’re using a service mesh, right? The light bulb goes on: how about we just redirect all traffic from redis-12.eu-n-3.example.com to redis-db-4.eu-n-1.example.com? That certainly will work; add or modify a few resources and voila, traffic is re-routed, and with zero downtime!
Only now there’s a new problem — your system is looking less like the tidy cluster you started with and more like a bowl of spaghetti!
What if we used a neutral name for the database? How about db.default.svc.cluster.local? We might start with the same mechanism for re-routing traffic: from db.default.svc.cluster.local to redis-12.eu-n-3.example.com. Then, when we need to make the change above, we just update the configuration to route traffic from db.default.svc.cluster.local to redis-db-4.eu-n-1.example.com. Done, and again with zero downtime!
This is Virtual Host Routing to a Logical DNS Name. Virtual Host Routing is traditionally a server-side concept — a server responding to requests for one or more virtual servers. With a service mesh, it’s fairly common to also apply this routing to the client side, redirecting traffic destined for one service to another service.
To give you a bit more context, a “logical name” is a placeholder name that is mapped to a physical name when a request is made. An application might be configured to talk to its database at db.default.svc.cluster.local, which is then mapped to redis-12.eu-n-3.example.com in one cluster and redis-db-4.eu-n-1.example.com in another.
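To make the mapping concrete, here is a rough sketch of a virtual service that routes the logical name to the physical Redis host. It assumes Redis on its default port 6379, and it would also need a Service for db and a ServiceEntry for the external host, along the lines of the walkthrough below:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: db
spec:
  hosts:
  - db.default.svc.cluster.local             # the logical name applications use
  tcp:
  - match:
    - port: 6379                             # assumed Redis port
    route:
    - destination:
        host: redis-db-4.eu-n-1.example.com  # the physical name
        port:
          number: 6379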
Common practice is to use configuration to supply DNS names to an application, for example a DB_HOST environment variable set directly to redis-12.eu-n-3.example.com or redis-db-4.eu-n-1.example.com. But when the configuration points directly at a physical server, redirecting the traffic later is harder.
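The same mechanism works just as well with a logical name. Here is a minimal sketch of a Deployment supplying DB_HOST, with a hypothetical reviews application and image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
  template:
    metadata:
      labels:
        app: reviews
    spec:
      containers:
      - name: reviews
        image: example/reviews:latest          # hypothetical image
        env:
        - name: DB_HOST
          value: db.default.svc.cluster.local  # logical name instead of a physical Redis host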
Best Practices
What are some best practices for working with external services? Practices like restricting outbound traffic and having the proxy do TLS origination can have a significant impact on security and visibility. The best practices listed below are not required, but this post is written assuming they are being followed.
Restricting Outbound Traffic
The outbound traffic policy determines whether external services must be declared. A common setting for this policy is ALLOW_ANY: any application running in your cluster can communicate with any external service. We recommend setting the outbound traffic policy to REGISTRY_ONLY, which requires that external services be defined explicitly. For security, the Aspen Mesh distribution of Istio uses REGISTRY_ONLY by default.
If you are using a different Istio distribution, or if you want to set the outbound traffic policy explicitly, restrict outbound traffic by adding the following to your values file when deploying the istio chart:
global:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
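If you install Istio through the IstioOperator API instead of the Helm chart, the equivalent setting (a sketch for Istio versions that support IstioOperator) lives under meshConfig:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY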
TLS Origination
If an application communicates directly over HTTPS to upstream services, the service mesh can’t inspect the traffic and it has no idea if requests are failing (it’s all just encrypted traffic to the service mesh). The proxy is just routing bits. By having the proxy do “TLS origination”, the service mesh sees both requests and responses and can even do some intelligent routing based on the content of the requests.
We’ll use the rest of this blog to step through how to configure your application to communicate over plain HTTP (change its https://… configuration to http://…).
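For example, if your application reads its upstream URL from configuration, the change is only the scheme and host. Here is a minimal sketch using the httpbin example from the walkthrough below (the ConfigMap name and key are hypothetical):
apiVersion: v1
kind: ConfigMap
metadata:
  name: httpbin-client-config                # hypothetical name
data:
  # Before: the app called https://httpbin.org/get and originated TLS itself.
  # After: plain HTTP to the logical name; the sidecar proxy originates TLS.
  UPSTREAM_URL: http://httpbin.default.svc.cluster.local/foo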
How to Set Up Virtual Host Routing to a Logical DNS Name
Service
A logical DNS name must still be resolvable. Otherwise, the service mesh won’t attempt to route traffic to it. In the YAML below, we define the DNS name httpbin.default.svc.cluster.local so that we can route traffic to it.
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  # No selector: this Service exists only so the logical DNS name resolves.
  ports:
  - port: 443
    name: https
  - port: 80
    name: http
ServiceEntry
A service entry indicates that we have services running in our cluster that need to communicate with the outside Internet. The actual host (physical name) is listed: httpbin.org in this example. Note that because we have the proxy doing TLS origination (just plain HTTP between the application and the proxy), port 443 lists a protocol of HTTP (instead of HTTPS).
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: http-port-for-tls-origination
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
VirtualService
A virtual service defines a set of rules to apply when traffic is routed to a specific host. In this example, when traffic is routed to the /foo endpoint of httpbin.default.svc.cluster.local, the following rules are applied:
- Rewrite the URI from /foo to /get
- Rewrite the Host header from httpbin.default.svc.cluster.local to httpbin.org
- Re-route the traffic to httpbin.org

Note that just re-routing the traffic is not sufficient for the server to handle our requests. The Host header is how a server determines how to process a request.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.default.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /foo
    rewrite:
      uri: /get
      authority: httpbin.org
    route:
    - destination:
        host: httpbin.org
        port:
          number: 443
DestinationRule
A destination rule defines policies that are applied to traffic after routing has occurred. In this case, we define policies for traffic going to port 443 of httpbin.org. The configuration above routes plain HTTP traffic to port 443; the following destination rule indicates that this traffic should be sent upstream over HTTPS (the proxy will do the TLS origination).
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin.org
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE # initiates HTTPS when accessing httpbin.org
Testing with a simple pod
That’s it! You can now deploy a service, configure it to talk to http://httpbin.default.svc.cluster.local/foo, and traffic will be re-routed to https://httpbin.org/get. Let’s test it out…
1. Create a pod (just for testing; typically you use deployments to create and manage pods):
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: pstauffer/curl
    command: ["/bin/sleep", "3650d"]
$ kubectl apply -f pod.yaml
The above pod just sleeps for 10 years. Not very interesting by itself, but it provides the curl command that we can use for testing.
2. Curl the logical name:
$ kubectl exec -c test-container test-pod -it -- \
curl -v http://httpbin.default.svc.cluster.local/foo
Here is the expected output (the response body was removed for brevity):
* Trying 100.66.72.128...
* TCP_NODELAY set
* Connected to httpbin.default.svc.cluster.local (100.66.72.128) port 80 (#0)
> GET /foo HTTP/1.1
> Host: httpbin.default.svc.cluster.local
> User-Agent: curl/7.60.0
> Accept: */*
>
< HTTP/1.1 200 OK
< date: Mon, 23 Sep 2019 22:04:08 GMT
< content-type: application/json
< content-length: 916
< x-amzn-requestid: 1743ed99-df5b-41c2-aa46-9662e10be674
< cache-control: public, max-age=86400
< x-envoy-upstream-service-time: 219
< server: envoy
<
* Connection #0 to host httpbin.default.svc.cluster.local left intact
...
And with that, you should be set!
Virtual Host Routing to a Logical DNS Name can be a useful tool, allowing a server to communicate with external services without needing to specify the physical DNS name of the external service. And a service mesh makes it easy, enhancing your capabilities and keeping things rational (no offense to spaghetti lovers!).
If you enjoyed learning about (and trying out!) this topic, subscribe to our blog to get updates when new articles are posted.