The ProxySQL agent is a small, statically compiled go binary for maintaining the state of a [ProxySQL](https://github.com/sysown/proxysql) cluster, designed to run as a kubernetes sidecar container. The repo includes a [Dockerfile](build/Dockerfile) to generate an alpine-based image, or you can use the version in the [GitHub Container Registry]().
There exists relatively little tooling around ProxySQL, so we hope that this is useful to others out there, even if it's just to learn how to maintain a cluster.
### "Self healing" the ProxySQL cluster

Some examples of where this is necessary:

- If _all_ core pods recycle, the satellite pods will run `LOAD PROXYSQL SERVERS FROM CONFIG`, which points them to the `proxysql-core` service; once the core pods are back up, the satellites should receive configuration again (see the sketch after this list)
- Note that if your cluster is running fine and the core pods all go away, the satellites will continue to function with the settings they already had; in other words, even if the core pods vanish, you will still serve proxied MySQL traffic as long as the satellites have fetched the configuration once
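
For illustration, that reload is a single statement issued against the admin interface. Below is a minimal sketch, assuming the agent speaks the MySQL protocol to the admin port via `database/sql` and the `go-sql-driver/mysql` driver; the credentials and port are placeholder ProxySQL defaults, not necessarily what the agent is configured with:

```go
package main

import (
	"database/sql"
	"log"

	// The ProxySQL admin interface speaks the MySQL wire protocol, so a
	// stock MySQL driver is enough to issue admin commands.
	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder admin credentials/port (ProxySQL defaults); the real
	// agent's configuration may differ.
	db, err := sql.Open("mysql", "radmin:radmin@tcp(127.0.0.1:6032)/")
	if err != nil {
		log.Fatalf("open admin connection: %v", err)
	}
	defer db.Close()

	// Re-point this satellite at the core pods named in its config file
	// (the proxysql-core service) so it can fetch configuration again
	// once the core pods are back.
	if _, err := db.Exec("LOAD PROXYSQL SERVERS FROM CONFIG"); err != nil {
		log.Fatalf("reload proxysql servers: %v", err)
	}
}
```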
### Why did you pick golang, if you work at a Ruby shop?
We looked into using ruby, and in fact the "agents" we are currently running **are** written in ruby, but there have been some issues:
- If the ProxySQL admin interface gets wedged, the ruby and mysql processes continue to spawn and spin, which will eventually lead to either inode exhaustion or a container OOM
- The scheduler spawns a new ruby process every 10s
- Each ruby process shells out to the mysql binary several times per script invocation
- In addition to the scheduler process, the health probe is a separate ruby script that also spawns several mysql processes per run
  - Two script invocations every 10s, one for liveness and one for readiness
We wanted to avoid having to install a bunch of ruby gems in the container, so we decided shelling out to mysql was fine; we got most of the patterns from existing ProxySQL tooling and figured it'd work short term. And it has worked fine, though there have been enough instances of OOM'd containers that it's become worrisome. This usually happens if someone is in a pod doing any kind of work (modifying mysql query rules, etc), but we haven't been able to figure out what causes the admin interface to become wedged.
Because k8s tooling is generally written in golang, the ruby k8s gems didn't seem to be as maintained or as easy to use as the golang libraries. And because the go binary is statically compiled, we don't need to deal with a bunch of external dependencies at runtime.
## Design
In the [example repo](https://github.com/kuzmik/local-proxysql), there are two separate deployments: the `core` and the `satellite` deployments. The agent is responsible for maintaining the state of this cluster.

On boot, the agent will connect to the ProxySQL admin interface on `127.0.0.1:6032`.

Additionally, the agent exposes a simple HTTP API used for k8s health checks for the pod, as well as a `/shutdown` endpoint, which can be used in a `container.lifecycle.preStop.httpGet` hook to gracefully drain traffic from a pod before stopping it.
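
As a rough sketch of that API, here is what the two endpoints could look like using `net/http`; the paths, port, and handler bodies are illustrative assumptions, not the agent's actual implementation:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Probe endpoint for the kubelet; a real handler would confirm the
	// ProxySQL admin interface is reachable before reporting healthy.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	// Target for the container.lifecycle.preStop.httpGet hook; a real
	// handler would drain traffic (e.g. via PROXYSQL PAUSE on the admin
	// interface) before the pod is stopped.
	mux.HandleFunc("/shutdown", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "draining")
	})

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```

The pod spec's preStop hook would then point its `httpGet` at `/shutdown` on that port, so traffic is drained before the container receives SIGTERM.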
## TODOs
There are some internal Linear tickets, but here's a high-level overview of what we have in mind.
```
rpc error: code = Unknown desc = Error: No such container: e3153c34e0ad525c280dd26695b78d917b1cb377a545744bffb9b31ad1c90670%
```
### Done
- *P1* - ~~Dump the contents of `stats_mysql_query_digests` to a file on disk; will be used to get the data into snowflake. File format is CSV~~
- *P1* - ~~Health checks; replace the ruby health probe with this~~
- *P2* - ~~Replace the pre-stop ruby script with this~~
### MVP Requirements
1. ✅ Cluster management (i.e. core and satellite agents)
1. ✅ Health checks via an HTTP endpoint, specifically for the ProxySQL container
1. ✅ Pre-stop hook replacement
## Releasing a new version
We are using [goreleaser](https://goreleaser.com/), so it's as simple as pushing to a new tag:
1. `git tag vX.X.X`
1. `git push origin vX.X.X`
This will cause goreleaser to run and output the artifacts; currently we are only shipping a linux amd64 binary and a Docker image.