Running PostgreSQL on Kubernetes - Percona Community PostgreSQL Live Stream & Chat - August 11th

Percona Community PostgreSQL Live Stream & Chat - August 11th

This Percona Community Live Stream shares a demo and some best practices for running Postgres on Kubernetes. It is part of a bi-weekly meetup with Dave and Charly to share experience and improve open-source database skills. Come and ask questions.

Video

Transcript

Dave Stokes:
Good morning, good afternoon, good evening, good wherever you are. I’m Dave Stokes, and with me today is Charly Batista. Forgive me, I’m babysitting the neighbor’s dogs; my dogs tend to be a little bit bigger. And by the way, if they give you any advice on databases, don’t believe them. They’re good on SQL Server, but know nothing about Postgres or MySQL. Today, Charly is gonna explain to us how Postgres runs on Kubernetes.

Charly Batista:
I see you have been busy these days, right? So we may get some advice from them. I can’t hear you.

Dave Stokes:
They’re not gonna be handling a simple query. They’re more, you know, spreadsheet-type dogs.

Charly Batista:
Yeah, they look cute. They like attention, right? So, okay. Today we’re gonna talk about Kubernetes, and how we run databases on Kubernetes. Last time, the talk was a bit more technical, actually trying to understand how those things work underneath. The talk today is not supposed to be so technical, because I’m not an expert at Kubernetes, let’s put it this way. I’m by no means a Kubernetes expert; we have other people inside Percona who know a lot more than I do. But today, we’re going to try to understand a little bit of how we run a database on Kubernetes. Now that we have an overview of how things work underneath the container: what exactly is a container? It’s just an abstraction at the operating system level, and it uses technology that’s already embedded in the kernel. It’s not like we have another OS running on top; what we have is just some abstraction, in some ways. It’s not virtualization, as we saw last time, and this is important to keep in mind: all those containerization things are not virtualization (a toy illustration follows after this paragraph). We’re just adding more abstractions to get better isolation and control over our resources. And that’s one of the reasons why running things in containers is usually a lot faster than running them on virtual machines. Okay, but that said, how do we run things on Kubernetes? Well, one of the first problems we need to solve is that initially Kubernetes was not designed for applications like databases that need to keep state, what we call stateful applications. In its beginning, Kubernetes was designed for applications that could be destroyed and rebuilt without keeping any context between what existed before and the new run of the application. For example, if we have a web app, a lot of the information is kept either on the client side, on the user side, or on the database side, not in the application itself. So for a lot of applications, you can just destroy the application server and spin up a new one. If a client was connected to the old one, it just sends a new request, and it will send some information to recover the context, for example cookies and all that kind of stuff we use on the web. The new server can take that and ask the database for some context, some idea of what was running before. Those applications don’t need to keep any context; they don’t need to persist information between transactions. And this is one big difference for the database. Actually, the main job of the database is to keep information: we want our database to persist information, and we don’t want to lose everything when we restart the database. Crashes happen; sometimes we need to restart the database. The main premise for a database is that when we restart it, or if something happens to it, some crash, when we start the database again we still have the data. So we still need to persist all that information. And not only the data; the database also needs to be reliable. For example, let’s say we have a crash. It’s not maintenance; something happened, the database crashed, and everything that was in the database’s memory was not properly saved to disk.
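On the point above that a container is a kernel-level abstraction rather than virtualization, here is a toy illustration. It is only a sketch, and assumes a Linux host with util-linux’s unshare available; it is not part of the demo itself.

```bash
# A "container" is really just kernel namespaces + cgroups, not a second OS.
# unshare gives this shell its own PID and mount namespaces using nothing
# but kernel features -- no hypervisor, no guest kernel.
sudo unshare --pid --mount --fork --mount-proc /bin/bash -c 'ps aux'
# Inside, ps sees only the new namespace's processes: our bash is PID 1.
```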
When the database is restarted, it needs to be able to recover fully: roll back any transactions that were not finished, and apply any finished transactions. Because for transactional databases there is a premise: once you commit a transaction, the transaction needs to be persisted; it doesn’t matter what happens, it needs to be persistent. So this is a huge thing for the database, a huge promise that the database gives us: once we finish a transaction, it’s done, it’s there, it’s saved on disk. So how do we combine those things and make them live together in an environment like Kubernetes, which was originally designed not to work this way? It did not care about that information, did not care whether we needed to restart an application. One of its main premises was: if I need to spin up a new one, I just spin it up; if one crashes, I just bring it back. So those were the problems that the engineers working with Kubernetes, wanting to put a database on it, started asking about: okay, how can we solve this? And they came up with what they call operators. In a very simplistic way, an operator is a whole, let’s put it this way, a whole infrastructure that coordinates with Kubernetes to work with stateful applications. There is this whole massive orchestration that it does under the hood to make an environment that was initially thought of and designed to be stateless now accommodate stateful applications, like databases. So the operator is the guy responsible for spinning up a new database, doing backups, and keeping the data safe when it needs to be persisted. This is the new guy that came to Kubernetes to help us organize and coordinate the whole thing. We can call it the coordinator, the keeper that works on Kubernetes, so that we can now work with those stateful applications. And this is what we have now: most of the companies, if not all, that work with databases, when they are moving to Kubernetes, are building operators to help coordinate their database being deployed on a Kubernetes ecosystem. We do the same at Percona: we have an operator for MySQL, we have an operator for MongoDB, and we have an operator for Postgres. And the one we’re going to talk about today, of course, is the Operator for Postgres. Let me share my screen here.
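Before the demo, it may help to see what an operator looks like from the kubectl side. This is a small sketch only; the resource names below are illustrative and depend on the operator version installed.

```bash
# An operator registers custom resource definitions (CRDs) with the
# Kubernetes API, so a whole database cluster becomes just another object
# you can list and describe like any built-in resource.
kubectl get crd | grep percona               # CRDs the operator added
kubectl get perconapgclusters                # each item is a full PG cluster
kubectl describe perconapgcluster cluster1   # desired spec vs. reconciled status
```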

So this is our main starting point when we begin working with the Percona Operator for PostgreSQL. What we’re gonna do today, as I said, is not going to be so technical, especially because I don’t have that much knowledge about Kubernetes. We’re not going to set up Kubernetes itself from scratch; that’s not the point. What we’re going to do here today: I’m going to use a Kubernetes cluster from Google, so I’m using GKE, and I’m going to deploy the Percona Operator on it. And we’ll see how easy and fast it is to deploy a database cluster using the operator; that’s the main thing. There are a lot of things we could cover as we go on; when it comes to Kubernetes, there’s a plethora of things we would need to learn and work through, and sometimes things get a little messy and complicated, especially for troubleshooting. That’s why it’s so important to understand how things work under the hood, like we did in the last talk for containers. So today the focus is just to take a look at the operator, to see how that part works and what we need to do to deploy a cluster. We’ll deploy one of the smallest clusters possible: one primary and two replicas. And we’ll look around to see what’s there, see that we can connect to the database, and how to do basic operations with the operator. This is the documentation. When you come here, you’ll see that we have the requirements and a quick start guide. This is the one we’re going to use tonight, because we’re going to use GKE, and that’s the main thing. The requirements page tells us which platforms are officially supported by the Percona Operator at the moment: we have GKE, the Google platform, AWS, and OpenShift. Of course, we can also build our own cluster to run the operator on. But building a cluster yourself is a quite tedious and failure-prone operation. I did that a couple of times; every time I’ve tried to build my own cluster, I failed. And that’s why I’m using GKE, because I wanted to use something that works. I’ll try to build a new one from scratch, just a cluster with three or four nodes, and if that works, I’ll try to replicate it and record a video, so we can walk through the whole story: how we build the Kubernetes cluster, how we install the operator, and everything. But yeah, my attempts here all failed, so I wanted something that works. A bit about the design and architecture here: as I said, the operator is the coordinator of things. There is this huge API on Kubernetes that exposes a lot of operations that can be done by operators. The ones we’ll care about today: we’re gonna deploy our operator, and we will use it to connect to Postgres. And if we have time, we might use it to do one backup and recover from that backup.
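For anyone following along, the deploy YAML files used throughout the demo ship with the operator repository. A rough sketch of fetching them; the branch to check out is an assumption, so follow the docs for the version you want:

```bash
# Clone the Percona Operator for PostgreSQL and look at the deploy files;
# check out the release branch recommended by the documentation.
git clone https://github.com/percona/percona-postgresql-operator
cd percona-postgresql-operator
ls deploy/   # operator.yaml, cr.yaml, backup/, and friends live here
```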

Let’s at least go through the documentation. So what we’re going to have in our cluster with the Percona Operator: as I said, we have a cluster with one primary and two replicas. And we also have some sidecars, things that help with the maintenance of the cluster. For example, as we can see here, we have PgBouncer for connection pooling; we have PgBouncer in front of our cluster here to do this connection pooling. Then we also have pgBackRest to help coordinate backups. So when we deploy our operator here, they all come together and are already installed to work with the operator. So, sorry, my mobile, let me put it on mute here, I don’t want interruptions. Yeah. All of these steps are pretty easy; we are not going through all this information, you guys can go through it yourselves. It explains what is there, a little bit about what our pods do, the things inside Kubernetes. As the main topic today is not to explain Kubernetes but actually to get a cluster installed and do some operations on it, I’m not going to discuss all those postings here. And to make things faster, instead of copying and pasting those things from the docs as we go, I did it before, and I have here my script that we’re going to run tonight. So if we go here, this is the documentation for GKE. It will tell you that you need to have an account, of course; you need to connect your account; you need to install kubectl. I already have those things here. Then it shows the default configuration for the zone we’re using; I already set the default configuration. And then we start creating our cluster. For GKE, the first thing I need to do is create my cluster. This will provision the machines inside the cloud. When I run this here, I’m telling Google Cloud that I want to create a cluster, this is the gcloud container API, and this is the name of the cluster I’m creating (a sketch of this command appears after this paragraph). The machine type I want to use is standard-4; it’s one of the most standard types. The standard-4 gives me four CPU cores per virtual machine and, if I’m not mistaken, 15 gigabytes of memory. So I will have 15 gigabytes on each of the machines and four CPU cores. And I want my cluster to have three nodes, creating the physical infrastructure it’s built on. So Google Cloud is gonna spin up three machines for me, each machine with four CPU cores, 15 gigabytes of memory, 45 gigabytes of disk space, and it will build a Kubernetes cluster for me. Copy and paste; it will take some time to create the cluster and validate everything that needs to be created. And one thing I wanted to mention as well: like all projects we have at Percona, the operator is also an open source project. It means that anybody can collaborate on the project. We can go to GitHub and find the Postgres operator for Percona. Everything is here; the whole project is open source. We find the same documentation here, and all the code base. Its main development has been done in the Go language. So for all of you out there who like to code and collaborate on open source projects, you are really welcome to contribute to all of the open source projects we have, including the Operator for Postgres.
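For reference, the cluster-creation command described above looks roughly like this. It is a sketch based on the GKE quick start: the cluster name, project, and zone are placeholders for your own values, and n1-standard-4 matches the specs described (4 vCPUs and 15 GB RAM per node).

```bash
# Provision a three-node GKE cluster; substitute your own project and zone.
gcloud container clusters create cluster-1 \
  --project my-project \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --num-nodes 3
```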

A lot of people might say: I’m not a developer, I don’t know how to do those things, I don’t like to do those things, or whatever the reason; I don’t want to develop. But if you use it and you find a bug or a problem, or if you find a feature that is missing, just go there and you can file a bug. It’s a good way to collaborate. You don’t need to collaborate only with code: filing bugs, or filing feature requests for something that’s missing or something that’s not working as expected, is a huge way of collaborating as well, and it’s always welcome. Sometimes you see something that from your perspective is really tiny, but maybe that tiny problem you see is causing trouble for thousands of other people who use the software. And that tiny collaboration, something you think is a tiny contribution, might be a huge improvement for an open source software. In open source there are no tiny collaborations; all collaborations are very valuable. We take everything into consideration; we listen to people, because we really want to improve the software. And we believe that by improving it we also improve the community, making the community stronger, because the community grows when the product is better. So this is something I really wanted to talk about, and I took advantage of the time waiting here for the cluster to be built. So yeah, collaboration, and look, we’re almost there. The cluster was created; now it’s doing a health check, checking if the nodes are healthy. They usually are; I don’t see any faults here. While we were waiting for it, I expected it to be a bit faster. So I’m going to use here the Percona Operator. There are many other operators for Postgres, and not only for Postgres; there are operators for MySQL, for MongoDB, for all the databases that come to my mind. Even for some proprietary ones you can find operators out there. And they’re really easy to deploy. You see that we have some YAML files here. Those YAML files are basically the configuration for the operator; they tell the operator what we want to do, right?

Because this takes a few steps to establish, but we need to do it anyway. So let me copy it; it will save us some time. Now that my cluster is built and done, I will get the credentials for my cluster. This will fetch the credentials and install them in some configuration files here; we don’t need to see them. From now on, when I try to use this cluster here, it will use the credentials that have been generated and installed here. Then I want to bind my user. I’m using the Google API; I have a user for Google, so I’ve already authenticated. And now I’m binding those credentials that I have there with the credentials we have for our Kubernetes cluster here. After doing the binding, we don’t need to do much more. Okay, so, as I was saying, the configuration files: everything, when we start working with operators on Kubernetes, our configuration will be stored in YAML files here, like this one, for example (the credential commands I just ran are sketched below).
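The two commands just described, roughly as the GKE docs suggest them; the cluster name and zone are the placeholders used earlier, so substitute your own:

```bash
# Fetch kubeconfig credentials for the new cluster, then bind the active
# Google account to the cluster-admin role so we can administer it.
gcloud container clusters get-credentials cluster-1 --zone us-central1-a
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user "$(gcloud config get-value core/account)"
```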

So, back to the YAML: this file describes what we want to do here; this is the file that will create our cluster with one primary and two replicas. And here it describes exactly what we want. We have the versions, we have the database, we might have the data source, and here we describe the primary: what is the image, this is the Docker image that will be used to build the node; how many resources we are going to use, CPUs, how much memory; the volumes, the disk space being used; how it will be exposed; whether we’re going to have a particular storage class inside and how that storage will be exposed here. All the configuration for the nodes will be here in these files. For example, we have replicas: how many replicas, we have two here, the hot standby; we have the volumes and how they will be created; and if we have other sidecar nodes, everything is described in those YAML files. So if we need more nodes, or if we need changes or anything, we can just open the file like here, go through the configuration, and change it to work the way we need for our needs. It’s all here. For now, I’m not going to change the configuration. We’re going to create a namespace; the namespace we call here is pgo. We could have created a namespace with whatever name, but this namespace is described inside some of those YAML files, so if we change the namespace, we need to carefully change some of those files.
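A condensed sketch of this step. The namespace name follows the documentation’s default; the grep is just a quick way to peek at the knobs mentioned above:

```bash
# Create the namespace the deploy files expect; if you pick another name,
# you must edit the YAML files that reference it.
kubectl create namespace pgo
# Peek at the tunables discussed above in the cluster definition:
# images, CPU/memory requests, and volume sizes for primary and replicas.
grep -n -e 'image:' -e 'cpu:' -e 'memory:' -e 'size:' deploy/cr.yaml
```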

As you’ll see here, some of those files use the namespace pgo, so we need to be careful if we change it, because we need to change them accordingly. Like here, when you go to the operator files, the operator namespace has been hard-coded. Those are the things we need to be careful with: if we change it, we just need to go and change it inside the configuration files as well. For now, we don’t want to bother with all that; we’re going to leave everything at the default. So, after creating the namespace, I set the context. If you pay attention, I’m using this kubectl command, and now that I’ve created the namespace and want to use it, I don’t want to type --namespace every time I use kubectl. That’s quite annoying, because if I don’t, it’s going to use the default namespace. So what I’m going to do here is set my default namespace: for this context, the namespace I want to use is pgo. From now on, I don’t need to type --namespace; I don’t need to tell kubectl that whatever I’m running should run in the pgo namespace, because it knows. And one thing I always like to do is run this command here to get all pods. Pods are, let’s put it this way, the units that Kubernetes creates every time we create something, an instance inside Kubernetes; it’s called a pod. It’s maybe not the best name to describe it, and this is very simplistic, but it’s like an instance of something, a container inside Kubernetes, in a very simplistic way. So, as we see here, there are no resources found in the pgo namespace. And now we want to start deploying the thing. The first thing we need to deploy is the operator itself, the whole infrastructure it needs inside Kubernetes. So I’m going to deploy here; this is the operator we’re using. And as you see, it started; it’s going to create its own container inside the cluster, and this will be the guy responsible for taking care of everything: he is the operator. After deploying the operator, we need to deploy our cluster; we need to create our cluster here. Remember the one we saw: the primary, the replicas, the bouncer, backrest, and all that kind of stuff? Yeah, this is the guy here. So if everything goes fine... yep, we didn’t see any error message. Oh, wait, I do see an error message here. Okay. Yeah, that’s one I hit on purpose, because we’re still waiting for this last guy here to be created. Sometimes it takes a little while for the cluster to create all the resources, and some resources have dependencies. For example, for me to create the cluster using the operator, I need to wait for the operator pod itself to be created. So theoretically, if I run it now, it should work, because the dependency we were waiting for has been created. And that’s what happened here: now we don’t have any error messages anymore. So now the magic is being done. The first thing that has been created here is the pgBackRest repo, and it’s shared.
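To recap the sequence that just ran, a condensed sketch using the deploy files from the operator repository:

```bash
# Make pgo the default namespace for this kubectl context.
kubectl config set-context --current --namespace=pgo
kubectl get pods                        # "No resources found in pgo namespace."
kubectl apply -f deploy/operator.yaml   # the coordinator pod itself
kubectl apply -f deploy/cr.yaml         # may error until the operator is ready; re-run
kubectl get pods --watch                # backrest repo, primary, replicas, pgbouncer appear
```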
So inside our cluster, we have a shared disk space, a repo that is controlled by this guy here from pgBackRest. All the backups that we create will, by default, be stored here. We can access this machine, and we’ll do it later; we can check the files and folders and everything. So every time we create a backup, by default, if we don’t change our configuration in those YAML files, it will be stored somewhere here inside this shared repo. And I should now have a primary node, this guy here. It’s a primary node, and we can access it just like we do with Docker: we can use the kubectl command, kubectl exec -it, the name of the pod we want to access, and run /bin/bash inside. Oops.
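The exec command being typed, as a sketch; the pod name below is illustrative, so copy the real one from `kubectl get pods`:

```bash
# Enter the primary pod just like a Docker container.
kubectl exec -it cluster1-xxxx-yyyy -- /bin/bash
# Inside the pod, the local socket lets us connect to Postgres directly.
psql
```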

And this should be my primary node. So if I run psql, I should be able to connect to my database; this is my primary node inside the cluster, right? So I already can access my primary node. We can, for example, see what databases we have here: I have the database pgdb. But as we are using PgBouncer, we should ideally not access the database directly like we did here; we should access the database through PgBouncer. And the nice thing is that a service has been created to answer for PgBouncer. See that with plain kubectl I have to type the whole name of the pod here; the name of my pod is made from cluster1, which is the name of the cluster, plus some sort of UID. And that’s definitely not the best way for an application, right?

Because what happens if one of those pods crashes? If one of those pods crashes, Kubernetes will spin up a new pod to replace it, and obviously this ID at the end will change. To solve that, we have this PgBouncer service here, which has the name of our cluster plus pgbouncer in the name: cluster1-pgbouncer. So we can access our service using that name. And to give an example of what we’re going to do here, I will create another pod. Let’s pay attention: we have one, two, three... ten pods; those are the pods we have. I’m going to create another pod; we’ll call this pod pg-client, and I’m going to use this image to create it. So I’m spinning up a new container inside my cluster, and I want to use it as an example: as if it were an application connecting to the database. How would an application be able to connect to the database? As you see here, it’s been created; this is a container with the name I gave it, and it only has the pg client, the Postgres client, here. And I will try to connect to my database. But I don’t have a username and password to connect to my database. How am I connecting to my database? That’s a good question. The operator exposes secrets that we can get from Kubernetes itself. Remember all those things here? When you create the cluster, the operator itself creates a user with a password defined for the database, and we can get that user and that password. So I query it; of course, I need to change my namespace here, it’s in the pgo namespace, not in the default namespace. Yep, the name of my cluster actually is cluster1. And here we go: I have a password and a username here that I can use to connect to the database. But those are base64-encoded by default, right? So first let me decode the username. Okay, this is the pguser, and it’s correct here. And the same can be done for the password; here I have my password. Replacing it here, this is the password. So I now can use this information from inside here and connect to my database (a condensed sketch of these steps follows below). That’s going to fail because the name is wrong; I need to use cluster1, this is the name of my cluster here. Okay, if everything goes well, I should be able to connect to my database. But now I’m connecting to my database through PgBouncer; I’m not connecting to the database directly anymore. And how can we validate this? So now we’re connected right to the database, and in psql I can create a new user. I’m going to create a new user, charly, give it superuser, and password 123 - very secure password. So I have this user, and I can log in as charly. Of course, this first attempt should fail; okay, now I can connect. This is because I don’t have an entry for local connections in pg_hba, so I need to use TCP/IP, and I can use 127.0.0.1, because I’m inside this very same box. So I can connect here; this is the primary.
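A sketch of the client-side steps above. The client image follows the pattern in the Percona docs, and the secret and key names follow the cluster1 defaults, but treat them as assumptions for your operator version; list the secrets first if yours differ.

```bash
# Spin up a throwaway pod with just the Postgres client inside the cluster.
kubectl run -i --rm --tty pg-client \
  --image=perconalab/percona-distribution-postgresql:13.2 \
  --restart=Never -- bash -il

# In another terminal: the operator stored generated credentials in a
# Secret, base64-encoded. Decode the username and password (key and secret
# names are assumptions -- verify with `kubectl get secrets`).
kubectl get secret cluster1-pguser-secret -o jsonpath='{.data.username}' | base64 -d
kubectl get secret cluster1-pguser-secret -o jsonpath='{.data.password}' | base64 -d

# Back inside the pg-client pod: connect through the PgBouncer service,
# not the pod name, using the decoded credentials.
PGPASSWORD='<decoded-password>' psql -h cluster1-pgbouncer -U pguser pgdb
```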

However, if I do the same through PgBouncer, I won’t be able to use the user charly, because I don’t have an entry for the user charly defined in my PgBouncer here. So it won’t let me connect; I would need to go to my PgBouncer and change the configuration. And this is a good thing: we should not allow direct connections to our database. All the connections here should go through PgBouncer, and of course through the users defined on PgBouncer. One thing we need to pay close attention to is this user here: this user has really limited privileges. Let me see if we can create a table with this user. No, we’re not able to, because, as I said, this user has limited privileges. By default, it’s not the owner of this database here; it doesn’t have the privilege to write, at least not to create tables or to change this database schema. So, here we have a full cluster installed. We can access our cluster through PgBouncer. This name here is accessible inside the Kubernetes cluster, which has its own DNS provider, so all these names are resolvable here. Our applications running on the same cluster will be able to access the database using this name. And the good thing is, if any of those pods crashes, if any of those PgBouncer pods, we have three pods here for PgBouncer, if any of them crashes, Kubernetes will just spin up a new pod and do all the internal configuration, properly change the DNS, and so on. So the high availability for PgBouncer is done by the operator inside Kubernetes as well. We don’t need to care about that; we don’t need to be concerned, because it will be taken care of by Kubernetes. It’s the same for the database: if we have a crash of the database, it will also be taken care of by Kubernetes. All those things are handled by Kubernetes. And if we go back to the documentation, we’ll see that we have some management guides here: we have backup and restore, we have horizontal scaling, we have monitoring with PMM. All those things that can be done here deserve their own session. But when we go to backup, remember that I said we have a shared repo? Well, it’s a volume that’s been created here, and it ensures that even if this guy crashes, the data will persist; it’s a persistent volume, so we don’t lose the backups, and the same goes for the database. The database nodes have persistent volumes, and a volume group has been created; all those volume groups are described in the YAML files, so we can go there, check, and change them. But a nice thing is that for backup, for example, we can use S3 buckets, and not only Amazon: any S3-compatible storage, the ones on Google, on Azure, or even the ones we build ourselves. pgBackRest can use them to do the backups. And it’s easy: mostly what we need to do is have the bucket, and then we have a configuration file, a YAML file, where we put the secrets, the keys, the metadata, all here, and then we just do a kubectl apply. And that’s it; this will add the configuration to our operator.
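A hedged sketch of wiring pgBackRest to S3-compatible storage. The secret name and keys below follow conventions I believe the v1.x operator uses, but they are assumptions; check the backup documentation for your version, and note that the bucket settings themselves go into the backup section of deploy/cr.yaml.

```bash
# Store the S3 credentials in a Secret the operator can read
# (name and keys are assumed -- verify against the docs).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-backrest-repo-config   # name the operator looks for (assumption)
  namespace: pgo
type: Opaque
stringData:
  aws-s3-key: REPLACE_WITH_ACCESS_KEY
  aws-s3-key-secret: REPLACE_WITH_SECRET_KEY
EOF
# Then an on-demand backup is itself just another YAML apply,
# using the template shipped in the operator repo.
kubectl apply -f deploy/backup/backup.yaml
```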
And we just apply that, and we have a backup. The amazing thing about operators is that operations like backups, recovery, even that high availability thing, a lot of those operations can be simplified using operators on Kubernetes. So this is one thing that is really appealing: a lot of companies, a lot of people out there, are starting to move, or at least planning to use Kubernetes for databases, because it can make life a lot easier for operations. On the other hand, things are not so easy when we need to troubleshoot, because now we have a lot more layers on top of what we usually have. For example, in this cluster here: remember, I built the cluster with three machines. So where exactly is my database running? We have little control over that. There are some configurations; we can pin things here and there, but then we are just limiting Kubernetes and the operator in how they work, and it’s not good to put too many constraints here. So troubleshooting sometimes becomes a lot more complex when we are using these tools. Even though operators have been with us for some years now, they are still sort of newish things, so we still need a lot of tooling to help troubleshoot. Tools like PMM help a lot with investigation, and we’ve been building more tools to look inside Kubernetes as well, from the PMM perspective and also from the database perspective. It helps a lot, but it’s still a complex thing that we definitely need to get hands-on with. It’s not as simple as just deploying things; the troubleshooting process is a lot more complex. Yeah, those are the things that I wanted to show you guys today. I see we may have some questions.

Dave Stokes:
Our first question was from attache, and I’m sorry if I’m butchering the name. He asked: is it advisable to use Postgres databases on Kubernetes, and can security be compromised, specifically when using cloud-provided clusters? Can you comment on using the cloud providers?

Charly Batista:
Yeah, exactly, when you talk about security: if you do not trust the provider, then it’s not about Kubernetes, right? If you’re moving to the cloud, it doesn’t matter whether you use Kubernetes or plain virtual machines; the concern is the same. If you don’t trust the cloud, the technology you’re using doesn’t matter, unless you use encryption. And even using encryption, one thing people need to keep in mind is that the key needs to be transferred somewhere at some point and will be kept in memory. So if you don’t trust your cloud, they could, for example, take a memory dump and get your key. And that’s the biggest thing. So is it secure? How much do you trust your provider?

Dave Stokes:
Speaking as someone who used to run secure and classified systems, the security with the cloud folks is often better than what most run themselves.

Unknown Speaker
Yeah, yeah.

Dave Stokes:
If you’re trying to use a drive for Postgres, you’ve got to make sure the speed is there, and unfortunately, some network storage devices are not fast enough to keep up with you; the network latency will limit what the device can do. But that’s it for today, and the dogs. If you have anything you want to see Charly cover in a future episode, I’m at Stoker on Twitter; hunt us down and let us know. And with that, we’d like to thank you for tuning in today, and thanks again to Charly for sharing his wonderful expertise. And if you haven’t tried Kubernetes yet, give it a try. I’ve been playing with AWS and it’s still not easy, still not simple, but it’s getting better.

Charly Batista:
Yeah, and as long as you don’t need to build a Kubernetes cluster yourself. That’s true, because I’ve had nightmares from those; I tried a couple of times and failed miserably to build my own cluster.

Dave Stokes:
Thank you, folks, and we’ll see you, hopefully in two weeks, with our next broadcast. Keep safe. Talk to you soon. Thanks. ∎

Speakers

Charly Batista

Percona, PostgreSQL Tech Lead

Charly Batista is currently PostgreSQL Tech Lead at Percona. He has over twelve (12) years of experience in various areas of IT, including database administration, data analysis, systems analysis, and software development, with strong analytical skills combined with experience in object-oriented programming techniques. He was Technical Leader of a developer team for more than four (4) years. Born in Brazil, he now lives in Shanghai, China.

See all talks by Charly Batista »

Dave Stokes

Technology Evangelist at Percona

Dave joined Percona in February 2022. Dave is the MySQL Community Manager and the author of MySQL & JSON - A Practical Programming Guide.

See all talks by Dave Stokes »
