Deploying MySQL on Kubernetes with a Percona-based Operator

by Flavius Mecea

In the context of providing managed WordPress hosting services, at Presslabs we operate lots of small to medium-sized databases, in what we call a DB-per-service model. The workloads are mostly reads, so we need to scale read capacity efficiently. The MySQL® asynchronous replication model fits the bill very well, allowing us to scale horizontally from one server—with the obvious availability pitfalls—to tens of nodes. The next release of the stack is going to be open-sourced.

Kubernetes MySQL Operator

As we were already using Kubernetes, we were looking for an operator that could automate our DB deployments and auto-scaling. The operators available at the time only did synchronous replication, using MySQL Group Replication or Galera-based replication, so we decided to write our own operator.

Solution architecture

The MySQL operator, released under the Apache 2.0 license, is based on Percona Server for MySQL for its operational improvements—like the utility user and backup locks—and relies on the tried and tested Orchestrator to do the automatic failovers. We’ve been using Percona Server in production for about four years, with very good results, which encouraged us to build on it in the operator as well.

The MySQL operator–Orchestrator integration is highly important for topology management, as well as for cluster healing and system failover. Orchestrator is a MySQL high availability and replication management tool developed and open-sourced by GitHub.

As we’re writing this, the operator is undergoing a full rewrite on top of the Kubebuilder framework—a logical step to simplify and standardize the codebase, making it more approachable to contributors and users.

Aims for the project

We’ve built the MySQL operator with several considerations in mind, driven by needs that no other operator could satisfy when we started working on it last year.

Here are some of them:

  • Easily deployable MySQL clusters in Kubernetes, following the cluster-per-service model
  • DevOps-friendly, with solid support for basic operations such as monitoring, availability, scalability, and backups
  • Out-of-the-box backups, scheduled or on-demand, and point-in-time recovery
  • Support for cloning, both inside a cluster and across clusters

It’s good to know that the MySQL operator is currently in beta, so evaluate it carefully before trusting it with production workloads. That said, you can take it for a spin and decide for yourself—we’re already using it successfully for part of our production workloads at Presslabs, for our customer dashboard services.

Moving on to more practical matters: we’ve successfully installed and tested the operator on AWS, Google Cloud Platform, and Microsoft Azure, and covered the step-by-step process in three tutorials here.

Setup and configuration

Using the operator is fairly simple. The only prerequisites are the ubiquitous Helm and kubectl.

The first step is to install the controller. Run the following two commands to make use of the Helm chart bundled with the operator:

$ helm repo add presslabs https://presslabs.github.io/charts
$ helm install presslabs/mysql-operator --name mysql-operator

These commands will deploy the controller together with an Orchestrator cluster. The configuration parameters of the Helm chart for the operator and its default values are as follows:

Parameter               | Description                                                         | Default value
----------------------- | ------------------------------------------------------------------- | -------------
replicaCount            | replicas for controller                                             | 1
image                   | controller container image                                          |
imagePullPolicy         | controller image pull policy                                        | IfNotPresent
helperImage             | mysql helper image                                                  |
installCRDs             | whether or not to install CRDs                                      | true
resources               | controller pod resources                                            | {}
nodeSelector            | controller pod nodeSelector                                         | {}
tolerations             | controller pod tolerations                                          | {}
affinity                | controller pod affinity                                             | {}
extraArgs               | args that are passed to controller                                  | []
rbac.create             | whether or not to create rbac service account, role and roleBinding | true
rbac.serviceAccountName | if rbac.create is false, this service account is used               | default
orchestrator.replicas   | Orchestrator replicas                                               | 3
orchestrator.image      | Orchestrator container image                                        |

Further Orchestrator values can be tuned by checking the values.yaml config file.
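For example—assuming a hypothetical dev-values.yaml file name, with parameter names taken from the table above—a throwaway development install could shrink both the controller and Orchestrator to a single replica:

```yaml
# dev-values.yaml — hypothetical override file for the mysql-operator chart
replicaCount: 1   # single controller replica
orchestrator:
  replicas: 1     # single Orchestrator node; no quorum, development only
```

Pass it at install time with `helm install presslabs/mysql-operator --name mysql-operator -f dev-values.yaml`.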

Cluster deployment

The next step is to deploy a cluster. For this, you need two things: a Kubernetes secret containing the MySQL credentials (root password, database name, user name, user password) used to initialize the cluster, and a MysqlCluster custom resource, as you can see below.

An example of a secret (example-cluster-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  ROOT_PASSWORD: # root password, base64 encoded
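The value must be base64-encoded, as Kubernetes requires for Secret data fields. One way to produce it on the command line (shown with a throwaway example password; -n keeps echo from appending a newline that would otherwise become part of the password):

```shell
# Base64-encode an example root password for the Secret manifest
echo -n 'secret' | base64   # prints c2VjcmV0
```

Paste the output as the value of ROOT_PASSWORD, then create the secret with kubectl apply -f example-cluster-secret.yaml.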

An example of simple cluster (example-cluster.yaml):

apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2
  secretName: my-secret

The usual kubectl commands can be used to do various operations, such as a basic listing:

$ kubectl get mysql

or detailed cluster information:

$ kubectl describe mysql my-cluster
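Scaling is then just a matter of changing replicas in the cluster manifest and re-applying it—a sketch, assuming the mysql.presslabs.org/v1alpha1 apiVersion current at the time of writing:

```yaml
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 3   # was 2; the operator provisions the extra node and wires up replication
  secretName: my-secret
```

Re-apply with kubectl apply -f example-cluster.yaml and watch the new pod join the cluster.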


A further step could be setting up backups on an object storage service. Creating a backup is as simple as creating a MysqlBackup resource, as in this example (example-backup.yaml):

apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlBackup
metadata:
  name: my-cluster-backup
spec:
  clusterName: my-cluster
  backupUri: gs://bucket_name/path/to/backup.xtrabackup.gz
  backupSecretName: my-cluster-backup-secret

To provide credentials for a storage service, you have to create a secret containing the credentials for your provider; we currently support AWS, GCS, or HTTP, as in this example (example-backup-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-backup-secret
type: Opaque
data:
  # AWS
  AWS_ACCESS_KEY_ID: # your key, base64 encoded
  AWS_SECRET_KEY: # your secret, base64 encoded

  # or Google Cloud, base64 encoded
  # GCS_SERVICE_ACCOUNT_JSON_KEY: # your key, base64 encoded
  # GCS_PROJECT_ID: # your project ID, base64 encoded

Recurrent cluster backups and cluster initialization from a backup are additional operations you can opt for. For more details, head to our documentation page.
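As a sketch of what a recurring backup can look like—the backupSchedule field name is an assumption based on the operator's documentation and may differ between versions, so check the docs for your release—the cluster spec itself can carry a cron-style schedule and a destination bucket:

```yaml
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2
  secretName: my-secret
  # take a backup every day at 03:00 and upload it to the bucket below
  backupSchedule: "0 3 * * *"
  backupUri: gs://bucket_name/
  backupSecretName: my-cluster-backup-secret
```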

Further operations and new usage information are kept up-to-date on the project homepage.

Our future plans include developing the MySQL operator further and integrating it with Percona Monitoring and Management, to better expose the internals of the Kubernetes DB cluster.

Open source community

Community contributions are highly appreciated. We should mention the pull requests from Platform9 so far, the sharp questions on the Gitter channel we’ve opened—which we do our best to answer in detail—and the issue reports from early users of the operator.

Come and talk to us about the project

Along with my colleague Calin Don, I’ll be talking about this at Percona Live Europe in November. It would be great to have the chance to meet other enthusiasts and talk about what we’ve discovered so far!

The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using advice from this or any other online resource, test ideas before applying them to your production systems, and always secure a working backup.

Flavius Mecea

Flavius Mecea is a software and operations engineer at Presslabs. A vivid builder and avid reader, Flavius is always up to something—exploring untapped possibilities using Kubernetes & Go, mentoring students in the secrets of algorithms and DevOps, or playing with hardware and machine learning. He’s a high-spirited person and an all-time helper who always meets a new day with curiosity.



