Installing and Setting Up MySQL on Kubernetes - Percona Community MySQL Live Stream & Chat - August 19th


Start running MySQL on Kubernetes with the guidance of Percona experts. Dave and Marcos were on the Community Live Stream to share some tips and tricks.


TRANSCRIPT

Marcos Albe:
Good morning, Dave. Good morning.

Dave Stokes: So today we’re talking about installing and setting up MySQL on Kubernetes. And yes, I am fumbling around the basics of Kubernetes. And you’re going to enlighten us and make us all experts.

Marcos Albe:
I will, I will do my best. I wouldn't say I'm a Kubernetes world expert or anything like that, and like with many other things, I am better at troubleshooting and finding problems than actually installing and setting up. But I will do my best. We're going to be looking at a simple deployment using one of our operators, which is the only way I will recommend deploying databases in Kubernetes. Having a good assistant, like our operators, is a great idea: you don't want to do this manually. There are too many pieces, too many steps, and the operator is going to take care of a lot of those and make things more consistent and more enjoyable. So before we do any Kubernetes work, we need Kubernetes. Let me share my screen. There it is. Do you see it? Is that readable? Yes. All right. So in our little support world, we do quite a bit of Kubernetes troubleshooting. To easily and quickly have access to a Kubernetes cluster that we know is in a clean state, we use a tool written by our colleague Nikolai, called anydbver. You can find the link to the tool in our chat: it's on GitHub under his repo, ihanick, and the tool is called anydbver. This is a very complete tool if you're into database testing, and you can do a lot of stuff, including deploying a Kubernetes cluster based on Rancher's k3s, and that's what I've done here. I deployed one, and I was deploying one more, but I'm getting some issue here, I don't know why. Okay, this one was a secondary cluster. I'm going to try to set up one more, just in case anything goes wrong, so I will have a secondary cluster where we can fall back and give it a second try; things can fail in computers. So it's very simple: it's a massive script that is going to take care of everything for you.
And you can deploy Mongo, MySQL, Docker, PMM, or whatever else you want to test; you can deploy LDAP and everything else you need to actually test databases. So, very handy. Highly recommended if you are into testing stuff. So, I already set up my other cluster here. If I do anydbver list, I see… something deleted my nodes. Oh, wow. Okay. Well, this was my mistake: I forgot to actually give it a second name, so I just accidentally destroyed my environment. It's going to take like three minutes or so, and you will see the process is very simple. In the meantime, let me tell you a bit about our operator. Our operator is already two years or so into development, so it has matured a lot. It's not the operator for Percona Server; that one is still not GA. We do have a Percona Operator for Percona Server, but the one we're going to be looking at today is the one that has been developed for a longer time, which is the one for Percona XtraDB Cluster. This creates a basic three-node PXC cluster, and because it doesn't make sense to run a three-node cluster on a single hardware node, the operator has anti-affinity rules that will prevent the PXC nodes from running on the same hardware nodes. So this only works if you have at least three Kubernetes worker nodes. Keep that in mind. If you're doing minikube, and you want to do testing with minikube instead of using k3s like I'm doing today, you can override the anti-affinity rules and let it run on a single node. Again, strongly not recommended; try to use something other than minikube to make it more realistic. The operator has the functionality to do other things like… oh, well. Sorry, just one second. Okay. One second, I'm sorry. This is quite unsettling. Let me try it one more time, and hopefully it will go okay; I had all this done, and I just destroyed it.
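The anydbver workflow Marcos is describing looks roughly like the sketch below. These command names are assumptions based on the tool's repo; flags differ between anydbver versions, so check the README in ihanick/anydbver before relying on them.

```shell
# Deploy a k3s-based Kubernetes test cluster (exact subcommand/flags
# may vary by anydbver version -- see the project README)
anydbver deploy k3s

anydbver list            # list the nodes that were created
anydbver ssh default     # log into the default (controller) node

anydbver destroy         # tear everything down when you are finished
```

The point of the tool, as Marcos says, is that destroying and redeploying is cheap, so you can break the cluster freely while learning.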
I'm super sorry for the delay. As I was telling you, the operator has the functionality to actually deploy the cluster and upgrade the cluster. It also has the functionality to do backups: it will stream backups to S3, and it can do incremental backups. It will also take care of keeping your binlogs for point-in-time recovery. It can do full backup recovery, and it can do point-in-time recovery. And you can also have multiple data centers: you can have one data center with one Kubernetes cluster and another data center with another Kubernetes cluster, deploy your PXC clusters on both, and then have asynchronous replication between them. So it's a really, really handy tool that already encapsulates and abstracts a lot of the most popular requests we get for deploying and operating PXC. There are some basic hardware requirements, and these are in our documentation: we specify that resource limits should not be less than two gigabytes of RAM and two CPU threads per node, with at least 60 gigabytes of storage. In all honesty, that's not truly enough unless you're doing testing, or unless you have like 100 queries per day. If you have any relatively serious deployment, you need to give it resources like you would any other database; because it runs in Kubernetes does not mean it is magically more optimized. So normally, if you already have your databases running outside Kubernetes, looking at the resources given to those instances is a good starting point. If you already use, I don't know, 16-gigabyte machines, then deploy 16-gigabyte pods. One thing we see very often with new customers is that when they start deploying things in Kubernetes, they will deploy very small pods, like one gigabyte of RAM and a fraction of a CPU, and PXC is very sensitive to performance. If a node has performance issues and falls behind with traffic, it will be evicted from the cluster. So you don't want your PXC nodes to suffer performance problems.
And so you have to give it enough hardware resources to work. With that in mind, the other requirement is running a recent enough Kubernetes version: the bare minimum is Kubernetes 1.9, or OpenShift 3.9. I assume everybody will be running a Kubernetes newer than that, but anyway, that's the minimum requirement. Other than that, there are no other hard requirements; those are basically it. You just need internet access from your servers, to pull the operator from GitHub and to pull the images from our repos. And that's it; the rest is contained in the Kubernetes cluster. So this is halfway done, I hope, and I hope it's not going to show us any other surprises. I apologize for the slow start. Well, you can see this guy is done with node one and has started with node two. So it takes a couple of minutes, but it gives you a working Kubernetes cluster, which I think is really cool. If you're learning Kubernetes, this is a way to break it, not care, and just go back and say, I'm just going to deploy it again, and do it over and over. I think it's far superior to running minikube, which I never had a good experience with; everything is different there because it has limited hardware resources and so on. This is a more realistic Kubernetes cluster. It's all contained within a single hardware instance, but it creates multiple nodes, and so the nodes appear to the Kubernetes scheduler as if they were running on real hardware, and the Kubernetes inside those nodes believes it's running on real hardware. That makes things closer to production, which is what you want when you're testing. So just give us a few more seconds, and in the meantime, I will check if anybody has questions. And I wonder what happened. Normally you can use namespaces and have multiple clusters or multiple deployments.
Okay, node three, we're almost done. We are almost done. Come on, come on, come on. No questions yet. So I wonder how many of our listeners are actually running MySQL or any other database in Kubernetes already. If you're running any kind of database in Kubernetes, please let us know what database it is and what your experience has been. Or if you have questions about it, I'll be glad to give it a shot. All right, there you go. That's it. Okay, now I can do anydbver list, and I properly have multiple nodes; that's great. And I can do anydbver ssh default, and I will log into my default node, which is a controller node. And here, I can do kubectl, but there is nothing created; it's a totally empty cluster. Another URL I want Dave to share is the URL for the operator itself. The operator is in our GitHub repo, clearly named, so you'll not be able to confuse it. The latest release is 1.11.0. When you go to our documentation page, the documentation itself is going to give you this link, and it's always going to have the latest version, so you can rely on the docs in this case not having outdated stuff. Right. So here we are: there's a bunch of stuff, and the interesting stuff is in the deploy folder. We are going to be applying those little by little. First, we're going to apply the custom resource definition, which will create a type of resource called pxc. You only need to do this when creating it for the first time or when upgrading the operator. If you later want to deploy more instances of the cluster under different namespaces, different names, or whatever, you don't need to do this step again; this is done once per operator version, so to speak. You can see it created the cluster, backup, and restore resource types. So we have all those. Then we're going to create a namespace, and we're going to make it the active namespace. This is regular Kubernetes preamble for doing pretty much any work.
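The steps just described, sketched as commands. The version tag and namespace name here are illustrative, not from the stream; the operator docs give the current release tag.

```shell
# Get the operator repo at a release tag (v1.11.0 is an example)
git clone -b v1.11.0 https://github.com/percona/percona-xtradb-cluster-operator
cd percona-xtradb-cluster-operator

# Create the custom resource definitions -- needed once per operator
# version, not per cluster you deploy later
kubectl apply -f deploy/crd.yaml

# Create a namespace and make it the active one for what follows
kubectl create namespace pxc
kubectl config set-context --current --namespace=pxc
```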
Then comes the role-based access control. If you know Kubernetes security, you will be able to tweak this to your needs; by default, it's a reasonably stringent set of rules. Go ahead and deploy. And finally — well, not finally, but with the preamble steps done — now we have an operator running.
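Those last two steps, as a sketch (file paths follow the operator repo's deploy folder mentioned above):

```shell
# Role-based access control for the operator
kubectl apply -f deploy/rbac.yaml

# Start the operator itself
kubectl apply -f deploy/operator.yaml

# Wait here until the operator pod shows 1/1 Running before moving on
kubectl get pods --watch
```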

And the operator runs as a deployment with a replica set, you can see. It's still creating, so let's give it a second. Watch… all right, it's ready and running. Excellent. Always wait until this guy is up and running: if you go for the next steps before it's running, nothing is going to work, and it's going to be a mystery why things don't work. So make sure you take a look with kubectl get all, or kubectl get pods --watch, before you do anything else. Then we have our secrets. Here we have passwords and whatnot that you could go ahead and configure. I'm just not going to do that now, but this is where you set the passwords before deploying. You can see you have one for root, one for the xtrabackup user, and one for the monitoring user; clustercheck is for the HAProxy cluster check, proxyadmin is for the ProxySQL admin interface, pmmserver is for the PMM server, operator is for the operator administrator, and replication is for the asynchronous replication user's password. And of course, this operator user is a MySQL user, right. Okay? Then I'm going to just deploy it. I don't need to do anything else, right?
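One practical detail when editing those secrets: the values in the secrets file are base64-encoded, not plaintext. A small sketch, with a made-up example password:

```shell
# Encode a password for pasting into deploy/secrets.yaml
echo -n "root_password" | base64
# → cm9vdF9wYXNzd29yZA==

# ...and decode one you read back out of the file
echo -n "cm9vdF9wYXNzd29yZA==" | base64 -d
# → root_password
```

After editing, the file is applied like the other manifests, with kubectl apply -f deploy/secrets.yaml.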

All right. That's that. The next thing you do is deploy the CR, the custom resource itself. Here is where you might want to go ahead and edit and customize things. And, well, replication channels: this is if you want to do inter-cluster replication, like if you have two clusters and you want asynchronous replication between them. If you want to tweak configuration, you can go ahead and uncomment here; this is like your my.cnf. We're going to see another way to do it, but a very easy way is to just do it here and set up anything you normally would have in your my.cnf. Basically, there's a whole heap of stuff you can tweak here, but mostly, as it is, you can go ahead and deploy, and later find out whether you need to tune anything if you're learning about it. One interesting thing is sidecars: we use these for diagnostics, so you might see them when you're working with us.

And usually, again, you will do much more here; like, don't skimp on resources, it will only lead to bad experiences with the operator. So with that, I can go ahead and apply the deploy CR YAML. Okay, and I can do kubectl get pods --watch, and we can see how it starts creating stuff. By default, we use HAProxy, which is simpler. I'm not going to say more reliable, but less easy to misconfigure — and it's not that that is not true either — and less intrusive. The proxy should be transparent to your application, while on the other hand, running ProxySQL, which has many benefits, might by default do unexpected things. For example, by default ProxySQL will allow a write to go to the writer node, and then the read that comes immediately after goes to a reader node, and that might break your application if you need full consistency. You can achieve full consistency, but it's not the default. So, to follow the rule of least astonishment and avoid surprises, we use HAProxy, which is a layer-4 proxy, so basically a TCP router: it just pushes TCP packets back and forth between endpoints, so it shouldn't change anything about how the application behaves. ProxySQL can change how the application behaves, so we don't deploy that by default. But in the CR YAML, you can go ahead and deploy ProxySQL instead of HAProxy; we might see that in a future presentation. Okay. This takes time; that is the cloud for you, there's a lot of time that you have to wait.
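This step of the walkthrough, in command form:

```shell
# Deploy the cluster custom resource and watch the pods come up
kubectl apply -f deploy/cr.yaml
kubectl get pods --watch

# This is the slow part: PXC nodes, HAProxy pods, and their volumes
# are created one after another, so expect to wait a few minutes.
```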

Okay, so I will share my very little Kubernetes knowledge. The READY column is telling us how many containers are within this pod. Right, I did kubectl get pod. So, this is a pod, and these are the containers: the number on the right is the number of containers we expect to have, and the number on the left is how many of those containers have already passed the readiness check. There are two checks, the liveness check and the readiness check. The liveness check basically verifies that something is there; the readiness check verifies that the something that is there is actually ready to do the work. So, for a pod to be considered fully functional, the count of expected containers and the count of ready containers must be the same. Ah, there you go. Fantastic. So, I'm just going to run get pods one more time, without the watch flag, to verify everything, because the watch is a bit cumbersome; for a quick verification, I just run it like that. And we can do kubectl get all to see what we have deployed so far. So we have the pods, and we have the services; this is what's exposed from our pods. You can see there are no external IPs; this is all intended to be accessed from within your Kubernetes cluster. There is no such thing as an external IP right now, though you could configure one in the CR YAML. And we have the deployment, which is the operator, the replica set for the operator, and the stateful sets — there are two stateful sets, one for the proxies and one for the databases. So now, the moment of truth. You want to list your clusters, and do this, and there you go: it's telling us the name of the cluster is cluster1, as expected. And let's see if I can log into MySQL from this guy… what's with you? All right. So I'm just trying to run… there you go.
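The verification pass just described can be summarized like this (the short resource name pxc and the cluster name cluster1 match the operator's defaults, but check your own CR):

```shell
kubectl get pods   # quick check: READY shows ready/expected, e.g. 2/2
kubectl get all    # pods, services, operator deployment, stateful sets
kubectl get pxc    # list the PXC custom resources -- e.g. cluster1
```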
Wait, let me read this again… there is an initializer… okay. Let me see. I want to do an interactive login: kubectl get pods, then kubectl run a client pod — as always, guys. All right, here I am, and now I should be able to go ahead and reach my cluster with mysql -h. And again, here is why getting the services is useful: I have a service that is exposing 3306, so that's what I want to reach, and I should be able to connect as the root user. We got our root password from the secrets… I hope it works… oh my. Fine, well. The service is the HAProxy one; this should work. Let me troubleshoot this. Let me try this one… okay, I can do kubectl exec on the PXC node and get a bash shell… yes, of course; I'm not attached to the container, but I should be able to reach MySQL, right? Attach to the container… there we go, I have my MySQL. What is wrong with you? So, I wonder… I'm sure the operator used to leave the passwords as given in the secrets, so I wonder why it behaved like that; I need to double-check. Anyway, at this point you have a regular cluster, and we can see it's a working cluster with three nodes, all primary and all in good shape. Finally, the next step: if you want to configure something and you don't want to do that in the CR YAML — which, I'm going to confess, is not my favorite way of doing things — you can go ahead and create yourself a regular configuration file. Oh, I deleted that… I'm going to do something simple, like changing a buffer size. For simple changes like this, you can create a config map with kubectl create configmap cluster1-pxc, and I'm going to do it from the file my.cnf.
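The client connection Marcos is attempting follows the pattern in the operator docs. A sketch, assuming the default cluster name cluster1 and HAProxy as the proxy:

```shell
# Run a throwaway MySQL client pod inside the cluster
kubectl run -i --rm --tty percona-client --image=percona:8.0 \
    --restart=Never -- bash -il

# From inside that pod, connect through the HAProxy service on 3306,
# using the root password taken from the secrets
mysql -h cluster1-haproxy -uroot -p
```

Because the service has no external IP by default, the client has to run inside the cluster; exposing it externally would be a change in the CR YAML.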
And you could later do kubectl describe on this config map, and you can see the same format we saw in the CR YAML. So when you do it, you're going to end up with something like this. What I don't love is all the indentation that YAML forces you to have; some people love it, I don't. A matter of taste, I guess. And then, if I do kubectl delete pod cluster1-pxc-2, it should pick up my config map. When you delete, you're actually restarting; it's not like you're truly deleting.
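The config-map flow, end to end. The name cluster1-pxc follows the operator's <cluster-name>-pxc convention, and the variable and value are just an example standing in for whatever setting you want to change:

```shell
# A plain my.cnf fragment with the settings you want (example value)
cat > my.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 256M
EOF

# Create the config map the operator applies to the PXC pods
kubectl create configmap cluster1-pxc --from-file=my.cnf

# Inspect it -- same format as the my.cnf block in the CR YAML
kubectl describe configmap cluster1-pxc

# Deleting a pod under a stateful set restarts it with the new config;
# recent operator versions roll the change out to all nodes themselves
kubectl delete pod cluster1-pxc-2
```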

Okay, there we go: running, two of three… all right. Now we can do SHOW GLOBAL VARIABLES LIKE, and we should see the new size. And yep, that's it: 256 megabytes. And now we will go to one of the other PXC nodes… and oops, it applied to everyone. It really applied the same to everyone. That's tricky: it's actually applying it automatically to everybody. I thought I had to do the restart; obviously, with the latest operator… I might have missed an operator release note, so I apologize for the confusion. It actually got applied because the operator itself takes care of applying the new config map: every node gets the same, automatically. And that's very much what I have for you today. I think the biggest point here is: don't try to deploy PXC or MySQL manually on Kubernetes; it will be really, really overwhelming, and you're unlikely to ever get the same result twice.

Dave Stokes: So with Kubernetes, automate it all the way.

Marcos Albe: Yeah, yes, yes, yes. You know, if you're going to go for the automation, do the whole automation, right? Don't start with manual steps and stuff like that. You have to create the persistent volumes, the persistent volume claims, the secrets; you have to create the certificates, the roles, the role bindings, everything. You will have to create everything, and it just doesn't make any sense to try to do your own scripts; it's just not that simple. I've learned that by working with the Kubernetes team, our operators team. I've been learning Kubernetes; I was a database specialist until not so long ago, and now I try to be whatever. But yeah, if you're a newcomer to the Kubernetes world, do try to learn how to write your own operators if you would like to write your own; but otherwise, use someone else's operators, and rely on existing things that are well tested, that have a development process behind them, that are professionally developed in some way. I think it's a good idea, and it's what I would do. I tell every customer: if this was my data center, if this was my database, I wouldn't be doing it by hand; I would certainly use an operator. I don't see any questions.

Dave Stokes: Thank you to the folks who will watch this later, and to the couple of folks who are watching this live. And thank you, Marcos, this has been very enlightening. I've got to go spin up something in AWS and try all this.

Marcos Albe:
Why would you pay AWS anything? Come on, this is free open source, and you can do it on your own laptop. That's the beauty. And really, imagine if I had to… there are quite a few of us at Support. So imagine every engineer coming up with, oh, I need to deploy a cluster; and then you go away because you have some other emergency, and you start leaving clusters up and running in AWS. By the end of the month, you're going to make our friend Jeff Bezos richer and yourself poorer. So just don't do that. Use the open source and run it on your laptop.

Dave Stokes: That I will do. Okay, sir. We'll be back in two weeks. Have a good day, everybody.

Marcos Albe:
Have a good day, everybody. Bye. Bye. Bye. ∎

Speakers

Marcos Albe

Principal Support Engineer, Percona

After 12 years working as a developer for local and remote firms, Marcos decided to pursue true love and become a full-time DBA. He has been doing MySQL support at Percona for the past 8+ years, providing leading web properties with advice on anything MySQL and in-depth system performance analysis.

See all talks by Marcos Albe »

Dave Stokes

Technology Evangelist at Percona

Dave joined Percona in February 2022. Dave is a MySQL Community Manager and the author of MySQL & JSON - A Practical Programming Guide.

See all talks by Dave Stokes »
