Kubernetes Gateway API setup with Cilium Load Balancing

This article guides you through configuring Kubernetes Gateway API resources and using Cilium's load balancing capabilities to expose applications outside the cluster.

Series - Gateway API

This post is a part of the Gateway API series on Kubito. Make sure you check out the other posts in this series in the menu above.

Building on the previous post about setting up Gateway API resources and configuring Cilium, this post focuses on using those resources together with Cilium's load balancing features to make Kubernetes applications accessible from outside the cluster.

In Kubernetes clusters using Cilium, the configuration of custom resources like CiliumLoadBalancerIPPool and CiliumL2AnnouncementPolicy plays a pivotal role in the effective management of network services. Here’s how they contribute:

  • CiliumLoadBalancerIPPool: This cluster-scoped custom resource manages the allocation of external IPs to Load Balancer type services. By configuring a CiliumLoadBalancerIPPool, you define the pool from which these services obtain the external IP addresses they need to be reachable from outside the cluster.

  • CiliumL2AnnouncementPolicy: This cluster-scoped resource controls Layer 2 network announcements. Proper configuration ensures that the surrounding network learns which node answers for a Load Balancer service's IP, integrating the service into the existing network.

In the context of the Gateway API:

  • The Gateway API introduces a more advanced and flexible approach to managing ingress traffic in Kubernetes through the Gateway custom resource.

  • When a Gateway resource is created, Cilium automatically creates a managed Load Balancer service for it. This service, in turn, needs an external IP address to function.

  • The configuration of Cilium, specifically through resources like CiliumLoadBalancerIPPool, becomes integral here. It ensures that the Load Balancer service, spawned by the Gateway API, receives an appropriate external IP address.

In summary, the configurations of CiliumLoadBalancerIPPool and CiliumL2AnnouncementPolicy enable seamless and efficient operation of Load Balancer type services in Kubernetes clusters. These configurations, in conjunction with the Gateway API, establish a robust and dynamic network infrastructure that is adaptable to various service requirements.

Here is an example configuration:

apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: example-ip-pool
spec:
  cidrs:
  - cidr: "10.0.1.250/30"
  serviceSelector:
    matchLabels:
      "io.kubernetes.service.namespace": "common"

---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: example-l2advertisement-policy
spec:
  serviceSelector:
    matchLabels:
      io.cilium.gateway/owning-gateway: example
  interfaces:
    - ^enx+ # regex matching the nodes' host interface names (enx* here)
  externalIPs: true
  loadBalancerIPs: true

Key aspects to highlight in this scenario: the CIDR range is restricted to only four IP addresses, and those addresses are reserved exclusively for Load Balancer services in the common namespace, where the Gateway resource resides. We also supply a host interface regex, which tells Cilium which physical interfaces on the nodes should announce the allocated IPs on the local Layer 2 network.

The Gateway API is a modern addition to the Kubernetes ecosystem that aims to unify routing configuration for Kubernetes services. Its primary goal is to supersede the Ingress resource and its divergent implementations across different Ingress Controllers.

A significant advantage of adopting the Gateway API is its standardization. Because the resources are implementation-agnostic, the routing configuration remains portable even if there's a shift away from Cilium in the future.
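
Cilium registers its Gateway API implementation through a GatewayClass, which the Gateway below references via gatewayClassName. When Gateway API support is enabled, Cilium creates this resource for you, so the following is only a sketch of what it looks like:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: cilium
spec:
  # Controller name under which Cilium registers its Gateway implementation
  controllerName: io.cilium/gateway-controller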

In this setup, where Cilium manages the Gateway Controller, the process is streamlined: you simply create a Gateway resource, and Cilium's Gateway Controller automatically generates the corresponding Load Balancer service and assigns it an external IP. This automation simplifies deployment and ensures efficient resource management.
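
For reference, the managed service that Cilium generates for a Gateway named example would look roughly like the sketch below. Cilium names these services cilium-gateway-<gateway-name> and labels them with io.cilium.gateway/owning-gateway, which is exactly the label the CiliumL2AnnouncementPolicy above selects on. You never apply this yourself; it is shown only to illustrate the link:

apiVersion: v1
kind: Service
metadata:
  name: cilium-gateway-example
  namespace: common
  labels:
    # The label the CiliumL2AnnouncementPolicy serviceSelector matches on
    io.cilium.gateway/owning-gateway: example
spec:
  type: LoadBalancer
  ports:
    # One port per Gateway listener (80 and 443 in our case)
    - name: port-80
      port: 80
      protocol: TCP
    - name: port-443
      port: 443
      protocol: TCP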

Key elements to emphasize in this setup include:

  • Distinct Listeners: We have established separate listeners for our .link and .io domains. This segregation is maintained for both HTTP and HTTPS protocols, ensuring tailored handling for each domain.

  • SSL Termination: SSL termination is conducted directly at our Gateway. This approach allows for centralized management and control of secure communications.

  • Certificate Management: Our SSL setup uses Let's Encrypt certificates, managed by our cert-manager deployment. cert-manager plays a pivotal role in automating and simplifying certificate handling in this architecture. You can learn more about the cert-manager integration in the following posts in this series.

  • HTTPRoute Configurations: We have configured the Gateway to recognize HTTPRoute configurations from all namespaces. This is a strategic decision, as it aligns with our deployment approach, where these configurations are set up in different namespaces corresponding to the services they route.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example
  namespace: common
spec:
  gatewayClassName: cilium
  addresses:
    - value: "10.0.1.250"
  listeners:
    # HTTP Listener for example.link
    - name: http-link
      port: 80
      protocol: HTTP
      hostname: "*.example.link"
      allowedRoutes:
        namespaces:
          from: All

    # HTTPS Listener for example.link
    - name: https-link
      port: 443
      protocol: HTTPS
      hostname: "*.example.link"
      tls:
        mode: Terminate
        certificateRefs:
        - kind: Secret
          name: wildcard-example-link
      allowedRoutes:
        namespaces:
          from: All

    # HTTP Listener for example.io
    - name: http-io
      port: 80
      protocol: HTTP
      hostname: "*.example.io"
      allowedRoutes:
        namespaces:
          from: All

    # HTTPS Listener for example.io
    - name: https-io
      port: 443
      protocol: HTTPS
      hostname: "*.example.io"
      tls:
        mode: Terminate
        certificateRefs:
        - kind: Secret
          name: wildcard-example-io
      allowedRoutes:
        namespaces:
          from: All

Upon creation, the HTTPRoute resources are associated with the Gateway. These resources are configured with specific matching rules and hostnames, tailored to the services we aim to make accessible from outside the cluster.

Key aspects of this configuration include:

  • HTTP to HTTPS Redirection: We manage the redirection from HTTP to HTTPS directly within these routes. This redirection is applied to both http-link and http-io listeners, as indicated in the sectionName of the HTTPRoutes.

  • Use of Wildcards: Wildcards are employed in the hostnames to ensure that the configuration encompasses not just the main domains but also any subdomains. This approach broadens the scope of our routing rules, making them more versatile.

  • Gateway Attachment: The HTTPRoutes are linked to the Gateway through the parentRefs section. In this section, we specify both the name and the namespace of the Gateway, which facilitates the integration of these routes with the Gateway’s routing mechanisms.

# HTTP Redirection Route (example.link)
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-link-redirect
  namespace: common
spec:
  parentRefs:
  - name: example
    namespace: common
    sectionName: http-link
  hostnames:
  - "*.example.link"
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
        port: 443

---
# HTTP Redirection Route (example.io)
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-io-redirect
  namespace: common
spec:
  parentRefs:
  - name: example
    namespace: common
    sectionName: http-io
  hostnames:
  - "*.example.io"
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
        port: 443

After configuring the redirection routes, the next step is to integrate services into this setup. The example below exposes two services, the ArgoCD UI and the Keycloak management UI:

# ArgoCD UI
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: argocd
  namespace: argocd
spec:
  parentRefs:
  - name: example # gateway name
    namespace: common
    sectionName: https-link
  hostnames:
  - "argo.example.link"
  rules:
  - backendRefs:
    - name: argocd-server
      port: 80

---
# Keycloak
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: keycloak
  namespace: keycloak
spec:
  parentRefs:
  - name: example # gateway name
    namespace: common
    sectionName: https-link
  hostnames:
  - "sso.example.link"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /auth
    backendRefs:
    - name: keycloak
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /js
    backendRefs:
    - name: keycloak
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /realms
    backendRefs:
    - name: keycloak
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /resources
    backendRefs:
    - name: keycloak
      port: 80
  - matches:
    - path:
        type: Exact
        value: /robots.txt
    backendRefs:
    - name: keycloak
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /admin
    backendRefs:
    - name: keycloak
      port: 80

The configuration for each service, ArgoCD and Keycloak, is distinctively tailored to their specific requirements, particularly in terms of hostname, path matching, and backend references.

  • Hostnames and DNS: Each service gets a unique hostname matching its DNS record, so traffic for that hostname is routed to the correct service.

  • Path Matching Rules:

    • ArgoCD: Since the intention is to expose all paths of the ArgoCD UI, there are no specific path matching rules defined. This approach simplifies the configuration and ensures full accessibility.

    • Keycloak: By contrast, Keycloak requires more granular control, so several path matching rules are employed, combining PathPrefix and Exact matching. This selectively exposes only the required Keycloak paths, enhancing security and control.

  • HTTPS-only Connections: In both instances, the services are configured to route through the Gateway’s https-link listener. This configuration is a deliberate choice to ensure that only HTTPS connections are allowed, thereby maintaining a high level of security.

  • Backend References: The backendRefs section is crucial as it specifies the service name and port. Since the HTTPRoutes are deployed in the same namespace as their respective services, the system efficiently locates and connects to the correct service. This namespace-based organization streamlines the routing process within the Kubernetes environment.
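
Note that this works without any extra configuration only because each route and its Service share a namespace. If an HTTPRoute ever needs to reference a Service in a different namespace, the Gateway API requires a ReferenceGrant in the Service's namespace. A minimal, hypothetical sketch (the monitoring namespace is a placeholder, not part of this setup):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-common-httproutes
  namespace: monitoring # namespace of the target Service (placeholder)
spec:
  from:
    # Allow HTTPRoutes from the common namespace...
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: common
  to:
    # ...to reference Services in this namespace
    - group: ""
      kind: Service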

This post extends the previous discussion of configuring Cilium and the Gateway API in Kubernetes, focusing on using these tools to expose applications externally. It highlights the role of CiliumLoadBalancerIPPool and CiliumL2AnnouncementPolicy in managing external IPs and Layer 2 announcements, both crucial for Load Balancer operation. Integrated with the Gateway API, they automate Load Balancer service creation and IP allocation for ingress traffic. The practical examples, covering the Cilium resources as well as the Gateway and HTTPRoute configurations, demonstrate how to build a robust, adaptable, and secure routing setup for Kubernetes applications.