Understanding Istio Ingress Gateway in Kubernetes

Traditionally, Kubernetes has used an Ingress controller to handle the traffic that enters the cluster from the outside. When using Istio, this is no longer the case. Istio has replaced the familiar Ingress resource with the new Gateway and VirtualService resources. They work in tandem to route traffic into the mesh. Inside the mesh there is no need for Gateways, since services can reach each other directly by their cluster-local service names.

So how does it work? How does a request reach the application it is intended for? It is more complicated than one might think. Here is a quick overview.

  1. A client makes a request on a specific port.
  2. The Load Balancer listens on this port and forwards the request to one of the workers in the cluster (on the same or a new port).
  3. Inside the cluster the request is routed to the Istio IngressGateway Service which is listening on the port the load balancer forwards to.
  4. The Service forwards the request (on the same or a new port) to an Istio IngressGateway Pod (managed by a Deployment).
  5. The IngressGateway Pod is configured by a Gateway (!) and a VirtualService.
  6. The Gateway configures the ports, protocol, and certificates.
  7. The VirtualService configures routing information to find the correct Service.
  8. The Istio IngressGateway Pod routes the request to the application Service.
  9. And finally, the application Service routes the request to an application Pod (managed by a Deployment).

The Load Balancer

The load balancer can be configured manually or automatically through a Service of type: LoadBalancer. In this case, since not all clouds support automatic configuration, I'm assuming that the load balancer is configured manually to forward traffic to a port that the IngressGateway Service is listening on. A manually configured load balancer doesn't communicate with the cluster to find out where the backing pods are running, so we must expose the Service with type: NodePort, which makes it reachable on every node, but only on high ports (30000-32767). Our LB is listening on the following ports.

  • HTTP – Port 80, forwards traffic to port 30080.
  • HTTPS – Port 443, forwards traffic to port 30443.
  • MySQL – Port 3306, forwards traffic to port 30306.

Make sure your load balancer configuration forwards to all your worker nodes. This will ensure that the traffic gets forwarded even if some nodes are down.

The IngressGateway Service

The IngressGateway Service must listen on all of the above ports to be able to forward the traffic to the IngressGateway pods. The Service also maps the high NodePort numbers back to the default port numbers.
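A trimmed-down sketch of such a Service might look like this (the name and label selector follow the default Istio install; as noted below, the real Service declares more ports than these):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort
  selector:
    istio: ingressgateway
  ports:
    - name: http2
      nodePort: 30080   # what the load balancer forwards to
      port: 80          # back to the default port number
    - name: https
      nodePort: 30443
      port: 443
    - name: mysql
      nodePort: 30306
      port: 3306
```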

Please note that a Kubernetes Service is not a "real" service, but, since we are using type: NodePort, the request will be handled by the kube-proxy provided by Kubernetes and forwarded to a node with a running pod. Once on the node, iptables rules forward the request to the appropriate pod.

If you inspect the Service, you will see that it defines more ports than I have described above. These ports are used for internal Istio communication.

The IngressGateway Deployment

Now we have reached the most interesting part of this flow, the IngressGateway. This is a fancy wrapper around the Envoy proxy, and it is configured in the same way as the sidecars used inside the service mesh (it is actually the same container). When we create or change a Gateway or VirtualService, the change is detected by the Istio Pilot controller, which converts the information into an Envoy configuration and sends it to the relevant proxies, including the Envoy inside the IngressGateway.

Don't confuse the IngressGateway with the Gateway resource. The Gateway resource is used to configure the IngressGateway.

Since container ports don't have to be declared in Kubernetes pods or deployments, we don't have to declare the ports in the IngressGateway Deployment. But if you look inside the Deployment, you can see that a number of ports are declared anyway (unnecessarily).

What we do have to care about in the IngressGateway Deployment is the SSL certificates. To be able to reference the certificates from the Gateway resources, make sure they are properly mounted into the IngressGateway pods.
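In the default Istio install, the certificates are mounted from a Secret along these lines (a sketch of the relevant Deployment fragment; the secret and volume names follow the default install):

```yaml
# Fragment of the istio-ingressgateway Deployment spec
spec:
  template:
    spec:
      containers:
        - name: istio-proxy
          volumeMounts:
            - name: ingressgateway-certs
              mountPath: /etc/istio/ingressgateway-certs
              readOnly: true
      volumes:
        - name: ingressgateway-certs
          secret:
            secretName: istio-ingressgateway-certs
            optional: true
```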

The Gateway

The Gateway resources are used to configure the ports for Envoy. Since we have exposed three ports with the Service, we need those ports to be handled by Envoy. We can do this by declaring one or more Gateways. In my example, I'm going to use a single Gateway, but it could be split into two or three.

Valid protocols are HTTP, HTTPS, GRPC, HTTP2, MONGO, TCP, and TLS. More info about Gateways can be found in the Istio Gateway docs.
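Covering all three ports with a single Gateway might look like this (a sketch; the certificate paths assume the default mount location in the IngressGateway pods, and the wildcard hosts accept any host name):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway   # select the default IngressGateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
        privateKey: /etc/istio/ingressgateway-certs/tls.key
      hosts:
        - "*"
    - port:
        number: 3306
        name: mysql
        protocol: TCP
      hosts:
        - "*"
```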

The VirtualService

Our final interesting resource is the VirtualService. It works in concert with the Gateway to configure Envoy. If you only add a Gateway, nothing will show up in the Envoy configuration, and the same is true if you only add a VirtualService.

VirtualServices are really powerful and they enable the intelligent routing that is one of the very reasons we want to use Istio in the first place. However, I'm not going into it in this article since it is about the basic networking and not the fancy stuff.

Here's a basic configuration for an HTTP(S) service.
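The sketch below routes one hypothetical host to an application Service (the application name, hostname, and port are assumptions; the Gateway is referenced by its fully qualified name):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - "app.example.com"
  gateways:
    - default-gateway.istio-system.svc.cluster.local
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: my-app   # the cluster-local application Service
            port:
              number: 8080
```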

Now that we have added both a Gateway and a VirtualService, the routes have been created in the Envoy configuration. To see this, you can kubectl port-forward istio-ingressgateway-xxxx-yyyy 15000 and inspect the configuration by browsing to http://localhost:15000/config_dump.

Note that both the gateway referenced and the host must match the information in the Gateway resource. If they don't, the entry will not show up in the configuration.

Here's a basic configuration for a TCP service.
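A sketch, since TCP has no host name to match on, routing is done on the port instead (the MySQL service name is an assumption):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mysql
spec:
  hosts:
    - "*"
  gateways:
    - default-gateway.istio-system.svc.cluster.local
  tcp:
    - match:
        - port: 3306
      route:
        - destination:
            host: mysql   # the cluster-local MySQL Service
            port:
              number: 3306
```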

This will result in a completely different configuration in the Envoy config.

Application Service and Deployment

Our request has now reached the application Service and Deployment. These are just normal Kubernetes resources, and I will assume that if you have read this far, you already know all about them. :)


Debugging networking issues can be difficult at times, so here are some aliases that I find useful.
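For instance (a sketch; alias names and the namespace are assumptions, and the pod-selection label follows the default Istio install):

```shell
# A sketch; adjust names and namespaces to your cluster.
alias k='kubectl'
alias ki='kubectl -n istio-system'
# igpf: port-forward to the Envoy admin port of the first IngressGateway pod
alias igpf='kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pods -l istio=ingressgateway \
    -o jsonpath="{.items[0].metadata.name}") 15000'
```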

When you have started the port-forwarding to the istio-ingressgateway, with igpf, here are some more things you can do.
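With the port-forward running, the standard Envoy admin endpoints answer on localhost:15000:

```shell
curl -s localhost:15000/config_dump   # the complete Envoy configuration
curl -s localhost:15000/listeners     # the ports Envoy is listening on
curl -s localhost:15000/clusters      # the upstream clusters Envoy routes to
```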


Networking with Kubernetes and Istio is far from trivial; hopefully this article has shed some light on how it works. Here are some key takeaways.

To Add a New Port to the IngressGateway

  • Add the port to an existing Gateway or configure a new one. If it's a TCP service, also add the port to the VirtualService; this is not needed for HTTP since it matches on layer 7 (host name, etc.).
  • Add the port to the IngressGateway Service. If you are using service type: LoadBalancer, you are done.
  • Otherwise, open the port in the load balancer and forward traffic to all worker nodes.

To Add Certificates to an SSL Service

  • Add the TLS secrets to the cluster.
  • Mount the secret volumes in the ingressgateway.
  • Configure the Gateway to use the newly created secrets.
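The first step can be done with kubectl, for example (a sketch; the certificate file names are assumptions, while the secret name matches what the default IngressGateway Deployment mounts):

```shell
kubectl -n istio-system create secret tls istio-ingressgateway-certs \
  --cert=example.com.crt --key=example.com.key
```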

This Post Has 7 Comments

  1. Thank you for the excellent post. I am confused about one part however – I see in your VirtualService you reference the associated gateway by its Kubernetes Service name i.e. default-gateway.istio-system.svc.cluster.local however in the Istio docs such as the page on Gateways you reference they instead use the metadata.name of the associated Gateway resources. Does istio support both and if so why did you opt for the svc name ? I ask as I have trouble successfully configuring an ingress following the istio docs themselves.

    1. The metadata.name, default-gateway, is the short form of the kubernetes name.
      default-gateway.istio-system.svc.cluster.local is the Fully Qualified Domain Name.
      The reason I’m using the fully qualified name is that I want to be able to refer to the Gateway from different namespaces.
      If your service is in the same namespace the short name should work.

  2. Thank you for this blogpost! Best read in a while, keep up the good work! You rock! :)

  3. So who does all these configs..the developers or some networking doode?
    I got into microservices with spring and Netflix libs like Zuul and eureka ..all those setting up to talk to services etc was done by the developer. Now I feel like ol school in a matter of months with the arrival of Istio. Do you know of any place that maps old tech with the new replacements.. something that cld rewire my brain.. I got some idea watching a vid by defog tech about service mesh.. but wld like to see a full blown example of how to microservice with this new tech..any links? Also the roles.. a java developer cares two hoots about all this networking mumbo jumbo.. I am thinking at least shld be aware of not just his broken down services but also the other deployments alongwith
    (Pardon me if I sound incoherent.. it is just too many confusing stacks for microservices)

    1. It depends. Most of the time the developers should only have to worry about the VirtualServices and handle retries, circuit-breaking and timeouts. But, if you are responsible for the whole platform, you will need to know it all.
      On the other hand if you are using a platform, such as something built on Knative someone else may manage everything for you.
      The good thing about istio is that it moves it out of the language specific libraries and lets us focus on developing the core of our applications.

      1. Thanks….
        Looks like there are too many similar things and things are still being tried out and hence the jury for a best of the breed is still out.
        I am forced to use azure. Some microservices will b in c# and some wld be in java. So need to gen up on how to develop and deploy my microservices to Azure with Istio
        I saw a springone session about spring and Istio.. seems like the springframework guys pivotal are going to ditch spring with Netflix style microservices and adopt spring with Istio

  4. you could add virtualservice without gateway and it will update envoy proxy.
