Life and Death of an application part 5: Agility, CaaS & SRE
In the previous article, we containerized Tito with Docker and described how to run it with a Docker Compose file.
Now, let's move on and bring even more agility to Tito.
In this article, I want to cover three different topics:
- How to create an architecture mixing containers and virtual machines
- How to leverage all the greatness of Kubernetes to manage the container lifecycle
- And, finally, how the monitoring discipline is shifting toward a DevOps model.
Hybrid application: mixing containers and virtual machines
Application components have different needs depending on their characteristics, and it's good to be able to pick the best runtime platform for each of them. Sometimes containers will be the right fit, sometimes virtual machines will be.
Having that choice is always good, and it's exactly what vRealize Automation (vRA) offers.
vRA has a great designer capability which can be used to assemble different technologies; virtual machines are one of them, but not the only one.
Here we have built Tito with the front end as containers and the database running in a VM.
A powerful mechanism that makes this possible is vRealize Automation's binding capability, which updates the containers "on the fly" with the IP address assigned to the database.
Once deployed, vRealize Automation takes care of provisioning the virtual machine, installing all the software components, setting the security level, deploying the containers, and passing on the right information for them to connect to the database. vRealize Operations then takes care of the monitoring side.
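To make the binding idea concrete, here is an illustrative pseudo-blueprint fragment showing how a container component could reference the database VM's IP address. The component names and the exact binding expression are assumptions on my part, and the real syntax varies between vRA versions, so treat this as a sketch rather than a copy-paste blueprint:

```yaml
# Illustrative vRA blueprint fragment (pseudo-YAML; names and binding
# syntax are assumptions and differ between vRA versions).
components:
  Database:
    type: virtual-machine        # provisioned as a VM
  TitoFrontEnd:
    type: container
    data:
      env:
        # The binding below is resolved "on the fly" at deployment time,
        # injecting the IP address assigned to the Database VM into the
        # front-end container's environment.
        - DB_HOST=${Database.ip_address}
```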
Container Lifecycle Management: Kubernetes (K8s) is the answer
2017 was the year Kubernetes became the de facto CaaS (Containers as a Service) standard for container lifecycle management.
Among the many goodies that K8s brings, I think these are particularly awesome:
- A runtime platform for containers that takes care of high availability, resource management, placement policies, load balancer integration, rolling updates, ...
- A level of abstraction to consume any type of container runtime (Docker, rkt, CRI-O, etc.)
- A standardized API that is the same across clouds (private, Azure, AWS, GKE, etc.)
- Sophisticated policy management to control your application's behavior (replication controllers, services, ...)
- Fully infrastructure-as-code oriented: your application infrastructure becomes a set of YAML (or JSON) files that K8s consumes.
- Governed by the Cloud Native Computing Foundation (CNCF)
If you are new to Kubernetes, I recommend reading this great post from Hany Michael describing what Kubernetes is for VMware users.
From Tito's point of view, the goodies mentioned above make K8s very attractive.
To run Tito on K8s, we need to properly "declare" how Tito needs to run.
In the previous article, we created a Docker Compose file describing how Tito would run. Unfortunately, this format is not compatible with Kubernetes, so we need to do this again. I was told about tools that convert Docker Compose files, but I haven't tested them. If you want to go down this route, one of them is named Kompose.
Kubernetes has different concepts, such as pods, replication controllers, and services, that we are going to use to properly run Tito. All the files describing this are available here.
Let's look into one of them to explain a little how it works.
Tito's front end is made of a container running Apache and the Tito code. This container needs to be available on port 80, and we want a minimum of 2 containers constantly running to handle the load and provide high availability. Also, we want to label all the K8s objects involved. For example, we want the Tito front end to be labeled as being in the "Dev" "stage", running the "Tito" "app" and being in "version" "1". What does that give me? For example, I will be able to update only my Tito containers carrying the Dev stage label while leaving containers with a Prod stage label untouched. Handy, right?
Here is the Tito front end replication controller file:
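The replication controller boils down to something like this — a minimal sketch reconstructed from the description above (2 replicas, port 80, and the app/stage/version labels); the container image name is an assumption:

```yaml
# Minimal sketch of the Tito front end replication controller.
# The image name is illustrative -- see the GitHub repo for the real file.
apiVersion: v1
kind: ReplicationController
metadata:
  name: tito-fe
  labels:
    app: Tito
    stage: Dev
    version: "1"
spec:
  replicas: 2                    # always keep 2 front ends running
  selector:
    app: Tito
    stage: Dev
    version: "1"
  template:
    metadata:
      labels:
        app: Tito
        stage: Dev
        version: "1"
    spec:
      containers:
      - name: tito-fe
        image: myregistry/tito-fe:1   # hypothetical image name
        ports:
        - containerPort: 80           # Apache serving the Tito code
```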
To have Tito running in your Kubernetes cluster, simply clone the GitHub repo, go to the K8 directory, update the tito-fe-ing.yml file to reflect your setup, and then type:
kubectl create -f .
Then connect to your ingress URL and voila!
So what are the benefits of running Tito on K8s?
- Tito's lifecycle management is taken care of. For example, the Tito front end is highly available, since K8s will take care of restarting the front end if needed.
- Updating Tito is a snap. It's now possible to run the K8s rolling-update command to update Tito in a very nice manner. K8s will take care of decommissioning the containers running the old version and gracefully replacing them with ones running the new code version.
- Tito is fully as code. My code and all the infrastructure I need are hosted on GitHub and inherit all the version control goodies.
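As an illustration, a rolling update of the front end looks roughly like this. The replication controller name and the image tag below are assumptions; adjust them to your own setup:

```shell
# Replace the tito-fe replication controller's pods one by one with the
# new image version; K8s decommissions the old containers gracefully.
kubectl rolling-update tito-fe --image=myregistry/tito-fe:2

# Thanks to the labels, operations can be scoped to Dev-stage Tito pods only:
kubectl get pods -l app=Tito,stage=Dev
```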
Monitoring + DevOps = SRE?
Monitoring is a big topic which I happen to like very much, since I have been using vRealize Operations and Log Insight for many years. What's happening right now is very interesting, because monitoring is being impacted by the DevOps transformation. Giving more freedom to developers does not end at delivering infrastructure more rapidly; it's much more than that. It's also about giving them the freedom to manage the monitoring of their apps in an agile manner, which includes treating the monitoring settings as part of the continuous deployment process. What does that mean practically from a monitoring perspective?
- Monitoring needs to be ready to consume any type of metric => freedom for developers, who are the ones deciding which metrics are best
- Monitoring as code is needed => easy to integrate in a continuous deployment process
- Monitoring needs to provide high-resolution metrics to showcase the immediate impact of a change => huge impact on the platform
So how are we going to address this without deploying yet another highly scalable platform that costs a lot of money to host and install? Introducing Wavefront, a SaaS-based solution provided by VMware as part of its new range of Cloud Services (read: SaaS).
Wavefront was bought by VMware at the beginning of 2017, and it's a great move! It offers metrics-as-a-service analytics with real-time responses and very high performance.
You can send any type of metric to Wavefront (from sources such as the application, application components, the operating system, the virtual layer, network switches, etc.) at a per-second resolution (<- this is huge). Wavefront will not even blink if you send it millions of data points per second, and it will keep the history forever! It's immediately available and compatible with a wide range of technologies, since it consumes public cloud APIs, relies on open source agents, and supports market standards.
As an example, let's see how Wavefront can help us monitor Tito, which now runs in Kubernetes.
The plumbing goes as follows:
container metrics are collected via Docker's cAdvisor mechanism, centralized by K8s via Heapster, then collected by the Wavefront proxy and sent into Wavefront. Sounds complicated? It's not. It takes 15 minutes to set up, and all you need to do is start 3 containers.
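Concretely, the wiring mostly comes down to pointing Heapster's sink at the Wavefront proxy. The fragment below is an illustrative sketch based on Wavefront's Heapster integration; the proxy service name and cluster name are assumptions, so check them against your own deployment:

```yaml
# Heapster container spec fragment (illustrative; adjust names to your setup).
# cAdvisor metrics gathered by Heapster are forwarded to the Wavefront proxy,
# which listens on port 2878 and relays them to the Wavefront service.
command:
  - /heapster
  - --source=kubernetes.summary_api:''
  - --sink=wavefront:wavefront-proxy.default.svc:2878?clusterName=tito-cluster&includeLabels=true
```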
Once done, wait 2 seconds (yes, 2 seconds) to see the data flowing into the Wavefront Kubernetes dashboard. Here are a few screenshots showing Wavefront displaying cluster, node, namespace, pod, and pod container metrics.
Wavefront is very good at ingesting and displaying information, but also at manipulating it. You can use Excel-like queries to filter and correlate the data. For example, I just added a filter to display only Tito-related containers, which allowed me to create this Tito dashboard:
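The filter itself is a one-liner in Wavefront's ts() query language. Something along these lines — the exact metric name and label tag depend on how Heapster reports them, so treat both as assumptions:

```
ts("heapster.pod_container.cpu.usage_rate", label.app="tito")
```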
Since Wavefront is natively DevOps-oriented, it's possible to manipulate all the dashboards and data as code. It integrates a version control system, and it's also possible to host the code in your own version control system. As an example, my Tito dashboard is hosted in my GitHub repo.
Site Reliability Engineering (SRE) is the new trendy term to describe how software engineering and operations are merging. Part of an SRE's job is to provide solutions that fit nicely into a highly automated environment, help reach continuous deployment, and reduce the gap between Dev and Ops. This is exactly what Wavefront is about. To go further on this topic, I suggest you read this great blog post showing an example of how Wavefront solved a fictional business issue in the finance industry.
The story does not end here. We talked about all the goodies of K8s, but we didn't look into what the underlying infrastructure can offer to K8s. K8s is only as highly available as the underlying infrastructure, and only as secure as the underlying infrastructure allows it to be. So what is the right underlying infrastructure for K8s? I suggest you read about what VMware, Pivotal and Google are doing to answer this here.