February 28, 2018
After months of testing, we recently moved a Ruby on Rails application to production on a Kubernetes cluster.
In this article we will discuss how to set up path-based routing for a Ruby on Rails application on Kubernetes using the HAProxy ingress controller.
This post assumes that you have a basic understanding of Kubernetes concepts like pods, deployments, services, ConfigMaps and ingress.
Typically our Rails app has services like unicorn/puma, sidekiq/delayed-job/resque, websockets and some dedicated API services. We had one web service exposed to the world through a load balancer, and it was working well. But as traffic increased, it became necessary to route requests based on URL paths.
However, Kubernetes does not support this type of load balancing out of the box. There is work in progress on the alb-ingress-controller to support it, but we could not rely on that for production use as it is still in alpha.
The best way to achieve path-based routing was to use an ingress controller.
We researched and found that there are different ingress controllers available in the Kubernetes world.
We experimented with nginx-ingress and HAProxy and decided to go with HAProxy, because it has better support for Rails websockets, which we needed in this project.
We will walk you through, step by step, how to use the HAProxy ingress controller with a Rails app.
First, let's build the Rails application deployment manifests for each service: web (unicorn), background (sidekiq), websocket (ruby thin) and API (a dedicated unicorn).
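The manifests below differentiate the four processes only by the APP_TYPE environment variable, so a single container image can use one entrypoint that dispatches on it. Here is a minimal sketch of that idea; the commands and config paths are assumptions about a typical app, not taken from this post:

```shell
#!/bin/sh
# entrypoint.sh: map APP_TYPE to the process this pod should run.
# The commands below are hypothetical; adjust them to your app.
app_command() {
  case "$1" in
    web)        echo "bundle exec unicorn -c config/unicorn.rb" ;;
    background) echo "bundle exec sidekiq" ;;
    websocket)  echo "bundle exec thin start -R websocket.ru -p 80" ;;
    api)        echo "bundle exec unicorn -c config/unicorn_api.rb" ;;
    *)          echo "unknown" ;;
  esac
}

# The image's ENTRYPOINT would then exec the selected command:
# exec $(app_command "$APP_TYPE")
```

This keeps one image for all four deployments, with only the env var differing between them.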
Here is our web app deployment and service template.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-web
  labels:
    app: test-production-web
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-web
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: web
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-web
  labels:
    app: test-production-web
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-web
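In production you may also want a readiness probe on the web container so rolling deploys don't route traffic to a Rails process that hasn't finished booting. A sketch of what could be added under the container spec, assuming a hypothetical /healthcheck endpoint in the app:

```yaml
# Goes under the container definition; /healthcheck is a hypothetical endpoint.
readinessProbe:
  httpGet:
    path: /healthcheck
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 10
```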
Here is the background app deployment and service template.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-background
  labels:
    app: test-production-background
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-background
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: background
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-background
  labels:
    app: test-production-background
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-background
Here is the websocket app deployment and service template.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-websocket
  labels:
    app: test-production-websocket
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-websocket
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: websocket
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-websocket
  labels:
    app: test-production-websocket
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-websocket
Here is the API app deployment and service template.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-api
  labels:
    app: test-production-api
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-api
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: api
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-api
  labels:
    app: test-production-api
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-api
Let's launch these manifests using kubectl apply.
$ kubectl apply -f test-web.yml -f test-background.yml -f test-websocket.yml -f test-api.yml
deployment "test-production-web" created
service "test-production-web" created
deployment "test-production-background" created
service "test-production-background" created
deployment "test-production-websocket" created
service "test-production-websocket" created
deployment "test-production-api" created
service "test-production-api" created
Once our app is deployed and running, we should create the HAProxy ingress. But before that, let's create a TLS secret with our SSL key and certificate.
This secret is used to enable HTTPS for the app URL and to terminate SSL on L7.
$ kubectl create secret tls tls-certificate --key server.key --cert server.pem
Here server.key is our SSL key and server.pem is our SSL certificate in PEM format.
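If you don't have a certificate handy for a test cluster, a self-signed pair is enough to try this out. A sketch using openssl; the CN is the hypothetical domain used later in this post, and a CA-issued certificate should be used in production:

```shell
# Generate a self-signed key and certificate, valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=test-rails-app.com" \
  -keyout server.key -out server.pem
```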
Now let's create the HAProxy ingress controller resources, starting with its ConfigMap.
For all the configuration parameters HAProxy supports, refer to the haproxy-ingress documentation.
apiVersion: v1
data:
  dynamic-scaling: "true"
  backend-server-slots-increment: "4"
kind: ConfigMap
metadata:
  name: haproxy-configmap
  namespace: test
Next is the deployment template for the ingress controller, with at least 2 replicas to manage rolling deploys.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress:v0.5-beta.1
        args:
        - --default-backend-service=$(POD_NAMESPACE)/test-production-web
        - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate
        - --configmap=$(POD_NAMESPACE)/haproxy-configmap
        - --ingress-class=haproxy
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
The notable fields in the above manifest are the arguments passed to the controller.
--default-backend-service is the service that handles a request when no rule matches it. In our case that is the test-production-web service, but it could be a custom 404 page or whatever you think is better.
--default-ssl-certificate is the SSL secret we just created above; it terminates SSL on L7 so our app is served over HTTPS to the outside world.
Next is the LoadBalancer-type service that allows client traffic to reach our ingress controller.
The load balancer has access to both the public network and the internal Kubernetes network, while retaining the L7 routing of the ingress controller.
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: test
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: stat
    port: 1936
    protocol: TCP
    targetPort: 1936
  selector:
    run: haproxy-ingress
Now let's apply all the HAProxy manifests.
$ kubectl apply -f haproxy-configmap.yml -f haproxy-deployment.yml -f haproxy-service.yml
configmap "haproxy-configmap" created
deployment "haproxy-ingress" created
service "haproxy-ingress" created
Once all the resources are running, get the LoadBalancer endpoint using:
$ kubectl -n test get svc haproxy-ingress -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
haproxy-ingress LoadBalancer 100.67.194.186 a694abcdefghi11e8bc3b0af2eb5c5d8-806901662.us-east-1.elb.amazonaws.com 80:31788/TCP,443:32274/TCP,1936:32157/TCP 2m run=haproxy-ingress
Once we have the ELB endpoint of the ingress service, map your DNS to it with a URL like test-rails-app.com.
Now, after doing all the hard work, it is time to configure the ingress with path-based rules.
In our case we want the following rules:
https://test-rails-app.com requests to be served by test-production-web.
https://test-rails-app.com/websocket requests to be served by test-production-websocket.
https://test-rails-app.com/api requests to be served by test-production-api.
Let's create an ingress manifest defining all of these rules.
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: test
spec:
  tls:
  - hosts:
    - test-rails-app.com
    secretName: tls-certificate
  rules:
  - host: test-rails-app.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-production-web
          servicePort: 80
      - path: /api
        backend:
          serviceName: test-production-api
          servicePort: 80
      - path: /websocket
        backend:
          serviceName: test-production-websocket
          servicePort: 80
Moreover, there are ingress annotations available for adjusting the configuration per Ingress resource.
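For example, the haproxy-ingress controller documents an annotation for forcing HTTP-to-HTTPS redirects; a sketch (verify the annotation name against the controller version you deploy):

```yaml
# Added under the Ingress metadata.
metadata:
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
```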
As expected, our default traffic on / is now routed to the test-production-web service, /api is routed to the test-production-api service, and /websocket is routed to the test-production-websocket service.
Thus the ingress implementation solves our problem of path-based routing and L7 SSL termination on Kubernetes.
If this blog was helpful, check out our full blog archive.