Introduction
I really love Bitnami's charts; they are well written and quite stable. Among other things, I always deploy Redis from their chart. Everything would be fine if you only needed Redis without High Availability, but that's not always the case.
What’s the difficulty in setting up Redis HA?
The issue is that the Redis client libraries used by your services must support Redis Sentinel. When you deploy the chart in replication mode, a Sentinel container is launched alongside Redis. Sentinel acts as a controller for the cluster and promotes a replica to master when the current master goes down, much like a MongoDB Arbiter. Because of Sentinel, the connection flow changes:
- Connect to Sentinel
- Request the master’s address
- Connect to the master
- If the master is unavailable, go back to Sentinel
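For a client library that does speak Sentinel, the steps above boil down to a retry loop. Here is a minimal Python sketch, with the Sentinel query and the connection step injected as callables (the helper names are illustrative, not from any specific driver):

```python
def connect_to_master(query_sentinel, connect, max_attempts=3):
    """Resolve the master through Sentinel and connect, retrying on failure.

    query_sentinel() -> (host, port) of the current master, as reported by
    `SENTINEL get-master-addr-by-name`; connect(host, port) -> a live
    connection, raising ConnectionError when the node is unreachable.
    """
    for _ in range(max_attempts):
        host, port = query_sentinel()   # steps 1-2: ask Sentinel for the master
        try:
            return connect(host, port)  # step 3: connect to the master
        except ConnectionError:
            continue                    # step 4: master gone, ask Sentinel again
    raise ConnectionError(f"no reachable master after {max_attempts} attempts")
```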
Very often, applications do not support this flow. So what do you do in that case? I decided that, as always, a workaround would help.
Below is a fragment of values.yaml that I use to launch bitnami/redis using Helmfile.
What’s going on there?
Among other things, I add a sidecar container that asks Sentinel for the master's address every N seconds and, if it differs from what the ExternalName service currently points to, patches the service. Applications that cannot talk to Sentinel can then simply be pointed at this service, and everything works because the service always resolves to the current master pod.
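The sidecar's decision logic is a tiny reconcile step. A hedged Python rendering, with the redis-cli and kubectl calls injected as callables so the logic can be exercised without a cluster (the helper names are mine, not from the chart):

```python
def reconcile(get_master_addr, get_external_name, patch_service):
    """Patch the ExternalName service only when Sentinel's answer differs.

    get_master_addr()   -> address Sentinel currently reports for the master
    get_external_name() -> .spec.externalName of the watched service
    patch_service(addr) -> issue the JSON patch replacing /spec/externalName
    """
    master = get_master_addr()
    if master != get_external_name():
        patch_service(master)  # service now points at the new master
        return True            # a patch was issued
    return False               # addresses already match; nothing to do
```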
This sidecar consumes about 65 mCPU and 1 MiB of memory. Of course, you could build a custom image with redis-cli included and connect to Sentinel directly, but I don't want to build, store, and keep updating such an image.
```yaml
# ┬─┐┬─┐┬─┐o┐─┐
# │┬┘├─ │ ││└─┐
# ┘└┘┴─┘┘─┘┘──┘
{{- $name := "redis" }}
{{- $masterServiceName := printf "%s-master" $name }}
nameOverride: {{ $name }}
fullnameOverride: {{ $name }}
architecture: replication
auth:
  enabled: true
  password: {{ quote .Values.secrets.redisPassword }}
replica:
  replicaCount: 3
  resources:
    limits:
      memory: 75Mi
    requests:
      cpu: 50m
      memory: 25Mi
  persistence:
    enabled: true
    storageClass: standard
    size: 1Gi
  podAntiAffinityPreset: hard
  sidecars:
    - name: master-service-watchdog
      image: '{{`{{ printf "bitnami/kubectl:%s.%s" .Capabilities.KubeVersion.Major .Capabilities.KubeVersion.Minor }}`}}'
      imagePullPolicy: IfNotPresent
      command: ["/bin/sh", "-ec"]
      args:
        - |
          while sleep 5; do
            MASTER_ADDRESS="$(kubectl exec -c sentinel "${POD_NAME}" -- sh -c "REDISCLI_AUTH='$REDISCLI_AUTH' redis-cli -p '${REDIS_SENTINEL_PORT}' SENTINEL get-master-addr-by-name this" | head -1)"
            if ! kubectl get svc "${MASTER_SERVICE}" -o jsonpath='{.spec.externalName}' | grep -qF "${MASTER_ADDRESS}"; then
              echo "info: master address has changed to ${MASTER_ADDRESS}"
              jq -Mnr --arg address "${MASTER_ADDRESS}" '[{op:"replace",path:"/spec/externalName",value:$address}]' | \
                xargs -0 kubectl patch svc "${MASTER_SERVICE}" --type=json --patch
            fi
          done
      env:
        - name: MASTER_SERVICE
          value: {{ quote $masterServiceName }}
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: REDISCLI_AUTH
          valueFrom:
            secretKeyRef:
              name: '{{`{{ template "common.names.fullname" . }}`}}'
              key: redis-password
        - name: REDIS_SENTINEL_PORT
          value: '{{`{{ .Values.sentinel.containerPorts.sentinel }}`}}'
      resources:
        limits:
          cpu: 100m
          memory: 50Mi
        requests:
          cpu: 100m
          memory: 50Mi
      securityContext:
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 1000
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop: [ALL]
        seccompProfile:
          type: RuntimeDefault
serviceAccount:
  create: true
sentinel:
  enabled: true
  masterSet: this
  automateClusterRecovery: true
  downAfterMilliseconds: 2000
  resources:
    limits:
      memory: 50Mi
    requests:
      cpu: 50m
      memory: 50Mi
networkPolicy:
  enabled: true
  extraIngress:
    - ports:
        - port: 6379
      from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: {{ .Release.Namespace }}
rbac:
  create: true
  rules:
    # Allow the watchdog to update our service
    - apiGroups: [""]
      resources: ["services"]
      verbs: ["get", "update", "patch"]
      resourceNames: [{{ quote $masterServiceName }}]
    # Allow exec into the Redis pods
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create"]
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list"]
pdb:
  create: true
raw:
  Service:
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ $masterServiceName }}
    spec:
      type: ExternalName
      # 1.1.1.1 is just a stub value that is used for the initial deployment.
      # It will be overridden as soon as the first watchdog is ready.
      externalName: 1.1.1.1
      ports:
        - port: 6379
          protocol: TCP
          targetPort: 6379
          name: redis
```
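With the watchdog in place, a Sentinel-unaware application needs nothing more than a static URL pointing at the maintained service. A small sketch of building that URL (the service name follows the values above; the namespace is a placeholder you would substitute):

```python
def redis_url(service="redis-master", namespace="default", port=6379):
    """Build the in-cluster URL for the watchdog-maintained service.

    The ExternalName service always resolves to whichever pod is
    currently the Redis master, so plain clients can use it directly.
    """
    return f"redis://{service}.{namespace}.svc.cluster.local:{port}"
```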