Traditionally, Kubernetes has used an Ingress controller to handle the traffic that enters the cluster from the outside. When using Istio, this is no longer the case. Istio has replaced the familiar Ingress resource with the new Gateway and VirtualService resources. They work in tandem to route traffic into the mesh. Inside the mesh there is no need for Gateways, since services can reach each other by their cluster-local service names.
So how does it work? How does a request reach the application it is meant for? It is more complicated than one would think. Here is a quick overview.
- A client makes a request on a specific port.
- The load balancer listens on this port and forwards the request to one of the worker nodes in the cluster (on the same or a new port).
- Inside the cluster, the request is routed to the Istio IngressGateway Service, which is listening on the port the load balancer forwards to.
- The Service forwards the request (on the same or a new port) to an Istio IngressGateway Pod (managed by a Deployment).
- The IngressGateway Pod is configured by a Gateway (!) and a VirtualService.
- The Gateway configures the ports, protocol, and certificates.
- The VirtualService configures routing information to find the correct Service.
- The Istio IngressGateway Pod routes the request to the application Service.
- And finally, the application Service routes the request to an application Pod (managed by a Deployment).
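If you want to see these pieces in your own cluster, the commands below list each of them. This is a minimal sketch that assumes Istio is installed in the istio-system namespace with the default labels and CRDs.

# The IngressGateway Service and its NodePorts
kubectl -n istio-system get svc istio-ingressgateway

# The IngressGateway pods behind that Service
kubectl -n istio-system get pods -l istio=ingressgateway

# The Gateway and VirtualService resources that configure the proxy
kubectl get gateways.networking.istio.io --all-namespaces
kubectl get virtualservices.networking.istio.io --all-namespaces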
The Load Balancer
The load balancer can be configured manually or automatically through the service type: LoadBalancer. Since not all clouds support automatic configuration, I'm assuming in this case that the load balancer is configured manually to forward traffic to a port that the IngressGateway Service is listening on. Manual load balancers don't communicate with the cluster to find out where the backing pods are running, so we must expose the Service with type: NodePort, and NodePorts are only available in the high port range, 30000-32767. Our load balancer is listening on the following ports.
- HTTP – Port 80, forwards traffic to port 30080.
- HTTPS – Port 443, forwards traffic to port 30443.
- MySQL – Port 3306, forwards traffic to port 30306.
Make sure your load balancer configuration forwards to all your worker nodes. This will ensure that the traffic gets forwarded even if some nodes are down.
The IngressGateway Service
The IngressGateway Service must listen on all the ports above to be able to forward the traffic to the IngressGateway pods. We use the Service's port mapping to bring the port numbers back to their default values.
Please note that a Kubernetes Service is not a "real" service; since we are using type: NodePort, the request is handled by the kube-proxy provided by Kubernetes and forwarded to a node with a running pod. Once on the node, an iptables configuration forwards the request to the appropriate pod.
# From the istio-ingressgateway service
ports:
- name: http2
  nodePort: 30080
  port: 80
  protocol: TCP
- name: https
  nodePort: 30443
  port: 443
  protocol: TCP
- name: mysql
  nodePort: 30306
  port: 3306
  protocol: TCP
If you inspect the service, you will see that it defines more ports than I have described above. These ports are used for internal Istio communication.
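To check which ports your own installation exposes, you can inspect the Service directly; this assumes the default istio-system namespace and service name.

# Show all ports defined on the ingress gateway Service
kubectl -n istio-system get svc istio-ingressgateway -o yaml

# Or just print the port names with their port and node port numbers
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{"\t"}{.nodePort}{"\n"}{end}'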
The IngressGateway Deployment
Now we have reached the most interesting part of this flow, the IngressGateway. This is a fancy wrapper around the Envoy proxy and it is configured in the same way as the sidecars used inside the service mesh (it is actually the same container). When we create or change a Gateway or VirtualService, the changes are detected by the Istio Pilot controller, which converts this information to an Envoy configuration and sends it to the relevant proxies, including the Envoy inside the IngressGateway.
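If you have istioctl installed, you can check that Pilot has actually pushed this configuration to the gateway. This is a sketch; the exact subcommands and output vary a bit between Istio versions.

# Show whether each proxy, including the ingress gateway, is in sync with Pilot
istioctl proxy-status

# Dump the routes Pilot has pushed to a specific ingress gateway pod
istioctl proxy-config routes istio-ingressgateway-xxxx-yyyy.istio-system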
Don't confuse the IngressGateway with the Gateway resource. The Gateway resource is used to configure the IngressGateway.
Since container ports don't have to be declared in Kubernetes pods or deployments, we don't have to declare the ports in the IngressGateway Deployment. But, if you look inside the deployment, you can see that there are a number of ports declared anyway (unnecessarily).
What we do have to care about in the IngressGateway Deployment is SSL certificates. To be able to access the certificates inside the Gateway resources, make sure that you have mounted the certificates properly.
# Example certificate volume mounts
volumeMounts:
- mountPath: /etc/istio/ingressgateway-certs
  name: ingressgateway-certs
  readOnly: true
- mountPath: /etc/istio/ingressgateway-ca-certs
  name: ingressgateway-ca-certs
  readOnly: true

# Example certificate volumes
volumes:
- name: ingressgateway-certs
  secret:
    defaultMode: 420
    optional: true
    secretName: istio-ingressgateway-certs
- name: ingressgateway-ca-certs
  secret:
    defaultMode: 420
    optional: true
    secretName: istio-ingressgateway-ca-certs
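The secrets referenced above can be created from existing certificate files. A sketch, assuming the key, certificate, and CA bundle sit in the current directory under these made-up file names:

# Server certificate and key used by the Gateway's SIMPLE TLS mode
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --key ./tls.key --cert ./tls.crt

# Optional CA bundle, e.g. for MUTUAL TLS
kubectl create -n istio-system secret generic istio-ingressgateway-ca-certs \
  --from-file=./ca-chain.cert.pem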
The Gateway
The Gateway resources are used to configure the ports for Envoy. Since we have exposed three ports with the service, we need these ports to be handled by Envoy. We can do this by declaring one or more Gateways. In my example, I'm going to use a single Gateway, but it could be split into two or three (a sketch of such a split follows below).
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - '*'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
  - hosts:
    # For TCP routing this field seems to be ignored, but it is matched
    # with the VirtualService; I use '*' since it will match anything.
    - '*'
    port:
      name: mysql
      number: 3306
      protocol: TCP
Valid protocols are HTTP|HTTPS|GRPC|HTTP2|MONGO|TCP|TLS. More info about Gateways can be found in the Istio Gateway docs.
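As mentioned above, the single Gateway could just as well be split up. Here is a sketch of moving the TCP port into its own Gateway, while the HTTP and HTTPS servers stay in default-gateway; the name mysql-gateway is made up for illustration.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mysql-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway   # selects the same ingress gateway pods as before
  servers:
  - hosts:
    - '*'
    port:
      name: mysql
      number: 3306
      protocol: TCP

A VirtualService that should be exposed through this Gateway would then list mysql-gateway.istio-system.svc.cluster.local (or the short name, from within the same namespace) in its gateways field.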
The VirtualService
Our final interesting resource is the VirtualService; it works in concert with the Gateway to configure Envoy. If you only add a Gateway, nothing will show up in the Envoy configuration, and the same is true if you only add a VirtualService.
VirtualServices are really powerful and they enable the intelligent routing that is one of the very reasons we want to use Istio in the first place. However, I'm not going into that in this article, since it is about the basic networking and not the fancy stuff.
Here's a basic configuration for an HTTP(s) service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: counter
spec:
  gateways:
  - default-gateway.istio-system.svc.cluster.local
  hosts:
  - counter.lab.example.com
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: counter
        port:
          number: 80
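With both resources applied, the route can be tested from outside the cluster. A sketch, assuming DNS for counter.lab.example.com points at the load balancer; otherwise substitute the load balancer address and pass the Host header explicitly.

# Through DNS pointing at the load balancer
curl -v http://counter.lab.example.com/

# Or directly against the load balancer address, faking the Host header
curl -v -H 'Host: counter.lab.example.com' http://<load-balancer-address>/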
Now that we have added both a Gateway and a VirtualService, the routes have been created in the Envoy configuration. To see this, you can kubectl -n istio-system port-forward istio-ingressgateway-xxxx-yyyy 15000 and check out the configuration by browsing to http://localhost:15000/config_dump.
Note that the gateway specified in the VirtualService, as well as the host, must match the information in the Gateway. If they don't, the entry will not show up in the configuration.
// Example of http route in Envoy config
{
  name: "counter:80",
  domains: [
    "counter.lab.example.com"
  ],
  routes: [
    {
      match: {
        prefix: "/"
      },
      route: {
        cluster: "outbound|80||counter.default.svc.cluster.local",
        timeout: "0s",
        max_grpc_timeout: "0s"
      },
      ...
Here's a basic configuration for a TCP service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mysql
spec:
  gateways:
  - default-gateway.istio-system.svc.cluster.local
  hosts:
  # The host field seems to only be used to match the Gateway.
  # I'm using '*'; the listener created is listening on 0.0.0.0.
  - '*'
  tcp:
  - match:
    - port: 3306
    route:
    - destination:
        host: mysql.default.svc.cluster.local
        port:
          number: 3306
This will result in a completely different configuration in the Envoy config.
listener: {
  name: "0.0.0.0_3306",
  address: {
    socket_address: {
      address: "0.0.0.0",
      port_value: 3306
    }
  },
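Since TCP routing only matches on the port, any MySQL client pointed at the load balancer should now reach the backing service. A sketch with a made-up hostname and user:

# Port 3306 on the load balancer -> NodePort 30306 -> Envoy -> mysql.default
mysql -h lb.lab.example.com -P 3306 -u exampleuser -p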
Application Service and Deployment
Our request has now reached the application service and deployment. These are just normal Kubernetes resources, and I will assume that if you have read this far, you already know all about them. :)
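For completeness, here is a minimal sketch of what the counter application's Service could look like so that it lines up with the VirtualService above; the label and container port are made up for illustration.

apiVersion: v1
kind: Service
metadata:
  name: counter            # matches the destination host in the VirtualService
  namespace: default
spec:
  selector:
    app: counter           # must match the labels on the application pods
  ports:
  - name: http
    port: 80               # the port the VirtualService routes to
    targetPort: 8080       # hypothetical port the container listens on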
Debugging
Debugging networking issues can be difficult at times, so here are some aliases that I find useful.
# Port forward to the first istio-ingressgateway pod
alias igpf='kubectl -n istio-system port-forward $(kubectl -n istio-system get pods -listio=ingressgateway -o=jsonpath="{.items[0].metadata.name}") 15000'

# Get the http routes from the port-forwarded ingressgateway pod (requires jq)
alias iroutes='curl --silent http://localhost:15000/config_dump | jq '\''.configs.routes.dynamic_route_configs[].route_config.virtual_hosts[]| {name: .name, domains: .domains, route: .routes[].match.prefix}'\'''

# Get the logs of the first istio-ingressgateway pod
# Shows what happens with incoming requests and possible errors
alias igl='kubectl -n istio-system logs $(kubectl -n istio-system get pods -listio=ingressgateway -o=jsonpath="{.items[0].metadata.name}") --tail=300'

# Get the logs of the first istio-pilot pod
# Shows issues with configurations or connecting to the Envoy proxies
alias ipl='kubectl -n istio-system logs $(kubectl -n istio-system get pods -listio=pilot -o=jsonpath="{.items[0].metadata.name}") discovery --tail=300'
When you have started the port-forwarding to the istio-ingressgateway, with igpf, here are some more things you can do.
- To see the Envoy listeners, browse to http://localhost:15000/listeners.
- To turn on more verbose logging, browse to http://localhost:15000/logging.
- More information can be found at the root, http://localhost:15000/.
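The same endpoints can be hit with curl while the port-forward is running, and the logging level can be changed on the fly. These are standard Envoy admin endpoints:

# List the Envoy listeners
curl -s http://localhost:15000/listeners

# Show the current logging levels
curl -s -X POST http://localhost:15000/logging

# Bump everything to debug (noisy!), then back to the default
curl -s -X POST 'http://localhost:15000/logging?level=debug'
curl -s -X POST 'http://localhost:15000/logging?level=info'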
Conclusion
Networking with Kubernetes and Istio is far from trivial; hopefully this article has shed some light on how it works. Here are some key takeaways.
To Add a New Port to the IngressGateway
- Add the port to an existing Gateway or configure a new one. If it's a TCP service, also add the port to the VirtualService; this is not needed for HTTP since it matches on layer 7 (domain name, etc.). See the sketch after this list.
- Add the port to the ingressgateway service. If you are using service type: LoadBalancer, you are done.
- Otherwise, open the port in the load balancer and forward traffic to all worker nodes.
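As a concrete (hypothetical) example, exposing PostgreSQL on port 5432 would touch these places; the NodePort 30432 is just an example value.

# 1. New server entry in the Gateway (plus a matching tcp route in a VirtualService)
- hosts:
  - '*'
  port:
    name: postgres
    number: 5432
    protocol: TCP

# 2. New port on the istio-ingressgateway Service
- name: postgres
  nodePort: 30432
  port: 5432
  protocol: TCP

# 3. With type: NodePort, also forward port 5432 on the load balancer
#    to NodePort 30432 on all worker nodes.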
To Add Certificates to an SSL Service
- Add the TLS secrets to the cluster.
- Mount the secret volumes in the ingressgateway deployment.
- Configure the Gateway to use the newly created secrets.
Thank you for the excellent post. I am confused about one part however – I see in your VirtualService you reference the associated gateway by its Kubernetes Service name, i.e. default-gateway.istio-system.svc.cluster.local, however in the Istio docs, such as the page on Gateways you reference, they instead use the metadata.name of the associated Gateway resource. Does Istio support both, and if so, why did you opt for the svc name? I ask as I have trouble successfully configuring an ingress following the Istio docs themselves.
The metadata.name, default-gateway, is the short form of the Kubernetes name.
default-gateway.istio-system.svc.cluster.local is the Fully Qualified Domain Name.
The reason I’m using the fully qualified name is that I want to be able to refer to the Gateway from different namespaces.
If your service is in the same namespace the short name should work.
Thank you for this blogpost! Best read in a while, keep up the good work! You rock! :)
So who does all these configs – the developers or some networking dude?
I got into microservices with Spring and Netflix libs like Zuul and Eureka; all that setting up to talk to services etc. was done by the developer. Now I feel like old school in a matter of months with the arrival of Istio. Do you know of any place that maps the old tech to the new replacements, something that could rewire my brain? I got some idea watching a vid by Defog Tech about service mesh, but would like to see a full blown example of how to do microservices with this new tech – any links? Also the roles: a Java developer cares two hoots about all this networking mumbo jumbo. I am thinking he should at least be aware of not just his own broken-down services but also the other deployments alongside them.
(Pardon me if I sound incoherent – it is just too many confusing stacks for microservices)
It depends. Most of the time the developers should only have to worry about the VirtualServices and handle retries, circuit-breaking and timeouts. But, if you are responsible for the whole platform, you will need to know it all.
On the other hand, if you are using a platform, such as something built on Knative, someone else may manage everything for you.
The good thing about Istio is that it moves this out of the language-specific libraries and lets us focus on developing the core of our applications.
Thanks….
Looks like there are too many similar things, and they are still being tried out, so the jury on the best of the breed is still out.
I am forced to use Azure. Some microservices will be in C# and some will be in Java, so I need to gen up on how to develop and deploy my microservices to Azure with Istio.
I saw a SpringOne session about Spring and Istio; it seems like the Spring Framework guys at Pivotal are going to ditch Spring with Netflix-style microservices and adopt Spring with Istio.
You could add a VirtualService without a Gateway and it will update the Envoy proxy.
Nice Post !!
Finally, in a multi ingress/egress environment, I did not get this: is it mandatory to have the gateway workload in the same namespace where the Gateway is defined?
Came across this post today! Good explanation, this is. I have a question though. What would happen if we don't add certificates to the ingress gateway deployment but add them to the Gateway object? I don't understand the need to have certs on the ingress gateway deployment. Could you explain in your amazingly simple to understand language!?