Automation & Operations in a Hybrid Cloud world

Blog about the Software Defined Datacenter, IT Automation, DevOps and other stuff

Life and Death of an application, Part 2: First steps in the Software Defined Datacenter

In Part 1 we saw how the application Tito was created, and we know what its architecture is.
Basically, we have the following diagram, which shows what kind of infrastructure the application needs to run properly in the Datacenter. So how do we get this infrastructure? Which is another way of asking: where do I run my code?
Architecture Tito


The current method
In most Enterprises today, the current method is to create a ticket and send it to a specific IT team; then a number of unknown people work on it, you go back and forth with some of them, and many weeks later you receive a virtual machine, on which you then install all the necessary application components, and finally your code. On average it takes between 2 weeks and 4 months. There are so many manual steps that you need to double-check everything, and you may well go through another round to request the proper library version or the proper network port opening.
But you're not done yet, because after using your application you're going to need a "Change". For example, you may want to add more CPU, add more memory, reboot, take a snapshot, add a new instance, or delete everything and start from scratch. To do this, the same method applies: you open a ticket, some people work on it at some point, etc... etc... Again, it takes too long!
 
Waiiiiiittttt.......


The Software Defined Datacenter method
Well, since the Datacenter is now fully software-defined, why not leverage it in a brand new way: actually draw your architecture on a whiteboard (the design phase) and then send this drawing (the blueprint) to the Software Defined Datacenter (SDDC)? Sounds cool? Let's see how we can do this, and then we'll go through the different parts.
By the way, the product showcased here is vRealize Automation (vRA). Underneath, we are using vSphere, NSX and vSAN in a hyperconverged architecture.
I hope you liked the music :)
As you saw, we can build an application architecture from scratch that includes virtual machines, a network (Layer 2), a Load Balancer, VM-level security, application components, and application code.
What was not shown in the video is that, behind the scenes, every deployment from this blueprint automatically integrates with DNS and with a monitoring solution that constantly tracks performance, handles capacity planning, and retrieves the logs.
 
To replay this video in slow motion, let me list what was done to get Tito designed:
A/ We picked the right OS (here, CentOS)
B/ We stated which network the application will run on
C/ We applied security rules for the web server and the SQL server
D/ We added a Load Balancer.
 
We could have stopped here, since this is already a good multi-tier platform that can host a lot of applications (it would be a generic multi-tier architecture), but since this series of articles is about the application Tito, we go further and also deploy the application components: the web server with the Tito code, and the MySQL server with the database configuration.
These application components were configured upstream, so we'll dive into them now.
 
How do you create your application components?
 
To answer this question, we need to ask another question: what does Tito need?
It needs a web server -> We will use Apache
It needs a database server -> We will use MySQL
The DB server needs to be configured with a proper database and table structure -> We'll use a SQL script
It needs the Tito code (a few files (PHP, CSS, etc...) and a few directories) -> We'll download them directly from GitHub
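The actual sql script isn't reproduced in this post, so as an illustration only, here is what such a database bootstrap might look like. The database, table, and column names are invented for the example; the real schema ships with Tito:

```shell
#!/bin/sh
# Illustrative DB bootstrap for Tito. Database, table, and column names
# are invented for this example; the real schema lives in the sql script.
SCHEMA_SQL=$(cat <<'EOF'
CREATE DATABASE IF NOT EXISTS tito;
USE tito;
CREATE TABLE IF NOT EXISTS entries (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(255) NOT NULL,
    message    TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
EOF
)
echo "$SCHEMA_SQL"
# In the real configure step, pipe it into the server:
#   echo "$SCHEMA_SQL" | mysql -u root -p"$DB_PASSWORD"
```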
 
Each of these steps can be done in many ways: you can use generic scripts, a configuration manager (such as Ansible, Chef or Puppet), or your own in-house solution. Here, we are using generic scripts.
TIP to create these scripts: deploy the VM you need, develop your scripts inside it, test them and, then, push them into vRA.
 
vRealize Automation provides integrated lifecycle management, so each application component automatically inherits steps such as install, configure, start and retire, as well as dynamic property management. What this means is that, for example, the Tito front end benefits from this integrated lifecycle to start the appropriate sequence at the right time and push the right value to the right place.
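As a rough sketch of how one generic script can cover those lifecycle steps, the snippet below dispatches on a step name. The single-script dispatcher layout and the echoed messages are my own illustration, not vRA's API; in vRA you typically attach a script to each lifecycle step of the component:

```shell
#!/bin/sh
# One generic script handling several lifecycle steps by dispatching on
# a step name. The echoed messages are placeholders; the real bodies
# would run the commands shown in the trailing comments.
lifecycle() {
    case "$1" in
        install)   echo "installing apache + php" ;;   # e.g. yum -y install httpd php
        configure) echo "writing vhost + env.conf" ;;  # e.g. render config files
        start)     echo "starting httpd" ;;            # e.g. systemctl start httpd
        *)         echo "unknown step: $1" >&2; return 1 ;;
    esac
}
lifecycle "${1:-install}"
```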
For example, if we want the requester to set his own DB password at request time, then when the DB is configured we use the value provided by the requester instead of a hard-coded one. Pretty handy.
Another example: let's say I want the requester to pick for himself the version of the code he wants to run. Dynamic properties make this very easy.
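Here is a minimal sketch of how such a requester-chosen property could be consumed in a generic script. CODE_VERSION is a hypothetical property name, and the GitHub URL is illustrative:

```shell
#!/bin/sh
# CODE_VERSION is a hypothetical vRA property letting the requester pick
# which Tito release to deploy; the repository URL is illustrative.
CODE_VERSION="${CODE_VERSION:-master}"
TARBALL="https://github.com/example/tito/archive/${CODE_VERSION}.tar.gz"
echo "fetching $TARBALL"
# curl -L -o /tmp/tito.tar.gz "$TARBALL" && tar -xzf /tmp/tito.tar.gz -C /var/www
```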
 
There's another situation where this dynamic property management is essential....
 
How do my services discover each other?
 
When we design our blueprint, we sit on top of a lot of technologies (compute, network, storage, application components, code, etc...). Because we "see" so many things, we can actually tie them together and exchange properties between them. In vRA, it is very easy to push a property from one component to another.
Why is this really cool?
 
Let's say I want to pass the IP information of the SQL server to the web server: I just need to "bind" this property, and it will be dynamically set for each deployment.
 
But wait a minute: while it's very important that I can pass this information along to my various components, how does my code know how to consume it?
In my code (form_result.php), I initialize a DB connection for which I need the DB's IP. vRA provides the mechanism to pass this information to the front end, but how do I get this information up to the code?
 
Coming from Ops, I didn't really know how to do this, so I had to do a little bit of research. In the end, it turned out to be very easy.
 
In a nutshell, we need to make the property "Tito-SQL" (containing the IP of the SQL server) available to the PHP code. vRA provides us with the value of "Tito-SQL", which we push into the Apache2 conf file to make it an environment variable that the PHP code can consume.
Quick step-by-step:
Modify the Apache2 conf file (/etc/apache2/mods-enabled/env.conf) with the following code:
 
<IfModule env_module>
    SetEnv TITO-SQL "$TITO-SQL"
</IfModule>
 
Then, the PHP code can consume the TITO-SQL variable with the following code:
 
$ip = getenv('TITO-SQL');
 
There are certainly other ways to do this but this one worked fine for me.
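For completeness, here is a sketch of how a vRA configure script could automate writing that env.conf file. TITO_SQL stands for the property value injected by vRA (shell variable names cannot contain hyphens, hence the underscore), and the CONF default is made local so the snippet is easy to try outside a VM:

```shell
#!/bin/sh
# Sketch of a configure step that writes the vRA-provided value into
# Apache's env module config so PHP can read it with getenv('TITO-SQL').
# TITO_SQL mirrors the "Tito-SQL" property (no hyphens in shell names).
TITO_SQL="${TITO_SQL:-192.168.10.5}"   # value injected by vRA at deploy time
CONF="${CONF:-./env.conf}"             # real target: /etc/apache2/mods-enabled/env.conf
cat > "$CONF" <<EOF
<IfModule env_module>
    SetEnv TITO-SQL "$TITO_SQL"
</IfModule>
EOF
echo "wrote $CONF"
```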
 
So we have seen how to design an infrastructure that runs the Tito code by leveraging compute, network, security and application components. The beauty of this is that you can then consume this blueprint as many times as you need: no ticket, no waiting weeks or months, no manual errors. Did I mention that it works in your private Cloud as well as in your public cloud (AWS, IBM Softlayer, OVH, vCloud Air, Atos, etc...)? In our next article, we'll see how to request the blueprint, track how it gets built, manage it, design other types of architectures for Tito and.... there's more coming!!!!
 
BTW, you can export the created blueprint in YAML. You can check the one for Tito on my GitHub.
Summary of the Tito architecture we built

