Ingress Nginx Multi Part Upload Session Affinity
Last updated 2nd March 2022.
Objective
Sticky sessions, or session affinity, is a feature that allows you to keep a session alive for a certain period of time. In a Kubernetes cluster, all the traffic from a client to an application, even if you scale from one to three or more replicas, will be redirected to the same pod.
In this tutorial we are going to:
- deploy an application on your OVHcloud Managed Kubernetes cluster through a deployment with several replicas
- set up an Nginx Ingress
- deploy an Ingress to configure the Nginx Ingress Controller to use sticky sessions/session affinity
- test the session affinity
Before you begin
This tutorial presupposes that you already have a working OVHcloud Managed Kubernetes cluster, and some basic knowledge of how to operate it. If you want to know more on those topics, please look at the OVHcloud Managed Kubernetes Service Quickstart.
You also need to have Helm installed on your workstation and your cluster. Please refer to the How to install Helm on OVHcloud Managed Kubernetes Service tutorial.
Instructions
Deploying the application
In this guide you will deploy an application, written in Go, that runs an HTTP server and displays the Pod name.
This kind of application will allow you to validate that the Nginx Ingress correctly maintains the session.
First, create a deployment.yml file with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: what-is-my-pod-deployment
  labels:
    app: what-is-my-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: what-is-my-pod
  template:
    metadata:
      labels:
        app: what-is-my-pod
    spec:
      containers:
      - name: what-is-my-pod
        image: ovhplatform/what-is-my-pod:1.0.1
        ports:
        - containerPort: 8080
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
This YAML deployment manifest file defines that our application, based on the ovhplatform/what-is-my-pod:1.0.1 image, will be deployed with 3 replicas (3 pods). We pass the pod name as an environment variable in order to display it in our what-is-my-pod application.
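The application only sees MY_POD_NAME as an ordinary environment variable injected by the Downward API. You can simulate what it does locally with a small shell sketch (the pod name "demo-pod-123" is made up for illustration; the real application's source may differ):

```shell
# Simulate the Downward API injection as seen from inside the container:
# the app just reads MY_POD_NAME from its environment and greets with it.
greet_from_pod() {
  MY_POD_NAME="$1" sh -c 'echo "Hello \"$MY_POD_NAME\"!"'
}
```

For example, `greet_from_pod demo-pod-123` prints `Hello "demo-pod-123"!`, the same greeting format the real pods return later in this guide.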
Then, create a svc.yml file with the following content to define our service (a service exposes a deployment):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: what-is-my-pod
  name: what-is-my-pod
spec:
  ports:
  - port: 8080
  selector:
    app: what-is-my-pod
Apply the deployment and service manifest files to your cluster with the following commands:
kubectl apply -f deployment.yml
kubectl apply -f svc.yml
Output should be like this:
$ kubectl apply -f deployment.yml
deployment.apps/what-is-my-pod-deployment created
$ kubectl apply -f svc.yml
service/what-is-my-pod created
You can verify that your application is running and the service is created by running the following commands:
kubectl get pod -l app=what-is-my-pod
kubectl get svc -l app=what-is-my-pod
Output should be like this:
$ kubectl get pod -l app=what-is-my-pod
NAME                                         READY   STATUS    RESTARTS   AGE
what-is-my-pod-deployment-78f7cd684f-5gtf9   1/1     Running   0          3m
what-is-my-pod-deployment-78f7cd684f-k2zpp   1/1     Running   0          3m
what-is-my-pod-deployment-78f7cd684f-xvwvh   1/1     Running   0          3m
$ kubectl get svc -l app=what-is-my-pod
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
what-is-my-pod   ClusterIP   10.3.57.203   <none>        8080/TCP   3m35s
Installing the Nginx Ingress Controller Helm chart
For this tutorial, we are using the Nginx Ingress Controller Helm chart found in its own Helm repository.
The chart is fully configurable, but here we are using the default configuration.
Add the Ingress Nginx Helm repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
These commands will add the Ingress Nginx Helm repository to your local Helm chart repository and update the installed chart repositories:
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nvidia" chart repository
...
...Successfully got an update from the "ingress-nginx" chart repository
...
Update Complete. ⎈Happy Helming!⎈
Install the latest version of Ingress Nginx with the helm install command:
helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace
The install procedure will begin and a new ingress-nginx namespace will be created.
$ helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace
NAME: ingress-nginx
LAST DEPLOYED: Mon Feb 28 16:04:05 2022
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
          - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
As the LoadBalancer creation is asynchronous, and the provisioning of the load balancer can take several minutes, you will surely get a <pending> EXTERNAL-IP.
If you try again in a few minutes, you should get an EXTERNAL-IP:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.3.232.157   152.228.168.132   80:30903/TCP,443:31546/TCP   19h
You can then access your nginx-ingress at http://[YOUR_LOAD_BALANCER_IP] via HTTP or https://[YOUR_LOAD_BALANCER_IP] via HTTPS.
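Rather than re-running the kubectl command by hand until the IP appears, you can poll for it in a loop. A small helper sketch (the service and namespace names match the Helm release installed above):

```shell
# Poll the controller Service until the cloud load balancer has been
# assigned an external IP, then print it.
wait_for_lb_ip() {
  ip=""
  while [ -z "$ip" ]; do
    ip=$(kubectl get svc -n ingress-nginx ingress-nginx-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
    [ -n "$ip" ] || sleep 10
  done
  echo "$ip"
}
```

Calling `wait_for_lb_ip` then blocks until the load balancer is provisioned and prints its IP.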
Configuring the Nginx Ingress Controller to use sticky sessions/session affinity
At this step, you need to deploy an Ingress resource and configure it to use sticky sessions.
Create an ingress-session-affinity.yml file with the following content:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "stickounet"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
  name: ingress
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: what-is-my-pod
            port:
              number: 8080
        path: /
        pathType: Prefix
In this manifest file you can see that we define an Nginx Ingress resource with several annotations. For more information about the annotations, please refer to the Nginx Ingress Controller documentation.
Apply the Ingress manifest file to your cluster with the following command:
kubectl apply -f ingress-session-affinity.yml
Output should be like this:
$ kubectl apply -f ingress-session-affinity.yml
ingress.networking.k8s.io/ingress created
You have set up and configured a Kubernetes Ingress resource that will maintain sessions for users, as in the illustration below:
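You can quickly confirm that the controller now sets the affinity cookie by looking at the response headers. A hedged sketch (the cookie value in the sample below is hypothetical; only the cookie name "stickounet" comes from the annotations above):

```shell
# Fetch only the response headers from the ingress and print the
# Set-Cookie header; with the annotations above, the cookie is named
# "stickounet".
show_affinity_cookie() {
  curl -sI "http://$1/" | grep -i '^set-cookie'
}
```

For example, `show_affinity_cookie 152.228.168.143` should print a `Set-Cookie: stickounet=...` header.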
Test the session affinity
The final step of this guide is to access our application and test the session affinity.
Execute the following command to retrieve the Load Balancer IP created by the Nginx Ingress Controller:
kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
You should get a Load Balancer IP like this:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
152.228.168.143
Now you can access this IP through your favorite browser and reload the page several times:
Every time you reload the page, you should get the same cookie value, and the Ingress redirects you to the same Pod.
You can also test the behavior with a curl command like this:
curl --cookie cookie.txt --cookie-jar cookie.txt http://152.228.168.143
You can execute the same command several times in a loop to validate that the session is correctly maintained:
$ for i in {0..5}; do curl --cookie cookie.txt --cookie-jar cookie.txt http://152.228.168.143; echo ""; done
Hello "what-is-my-pod-deployment-78f7cd684f-xvwvh"!
Hello "what-is-my-pod-deployment-78f7cd684f-xvwvh"!
Hello "what-is-my-pod-deployment-78f7cd684f-xvwvh"!
Hello "what-is-my-pod-deployment-78f7cd684f-xvwvh"!
Hello "what-is-my-pod-deployment-78f7cd684f-xvwvh"!
Hello "what-is-my-pod-deployment-78f7cd684f-xvwvh"!
The tip when using curl with cookies is to store the received cookies in a file and read them back from that file later.
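The cookie jar curl writes is a plain text file in the Netscape format, one cookie per tab-separated line, so you can also inspect the stored affinity cookie directly. A sketch (the sample jar contents used below are hypothetical):

```shell
# Extract a cookie's value from a curl cookie jar. Netscape format fields:
# domain, include-subdomains flag, path, secure, expiry, name, value.
get_cookie_value() {
  awk -v name="$2" '$6 == name { print $7 }' "$1"
}
```

For example, `get_cookie_value cookie.txt stickounet` prints the value the ingress controller assigned to your session.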
Go further
Join our community of users on https://community.ovh.com/en/.
Source: https://docs.ovh.com/sg/en/kubernetes/sticky-session-nginx-ingress/