Automation & Operations in a Hybrid Cloud world

Blog about the Software Defined Datacenter, IT Automation, DevOps and other stuff

Articles with tag : sddc

12/27/2017 From Overblog

Life and Death of an application part 5: Agility, CaaS & SRE

In the previous articles, we showed:

- how the Tito application code was created;
- how the application blueprint describing Tito's architecture was created;
- how the Software Defined Datacenter (SDDC) would consume this blueprint, deploy Tito and monitor it;
- how Tito would run in containers.

Now, let's move on and bring even more agility to Tito. In this article I want to cover three different topics: how to create an architecture mixing containers and virtual machines; how to leverage all the greatness of Kubernetes to manage the container lifecycle; and, finally, how the monitoring discipline is shifting toward adopting a DevOps model. Hybrid application: mixing containers and virtual machines. Application components have different needs according to their characteristics and it's...
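As a toy illustration of the "let Kubernetes manage the container lifecycle" idea mentioned above (this is a sketch of the reconcile concept, not actual Kubernetes code), a controller keeps comparing desired state to actual state and acts on the difference:

```python
def reconcile(desired, actual):
    """Toy version of the reconcile loop behind Kubernetes controllers:
    compare the desired replica count per app with what is actually
    running and emit the start/stop actions needed to converge.
    (Illustrative only -- real controllers work against the API server.)
    """
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if have < want:
            actions += [("start", app)] * (want - have)
        elif have > want:
            actions += [("stop", app)] * (have - want)
    return actions

# If 'tito-web' should run 3 containers but only 1 is up,
# the loop asks for two more:
reconcile({"tito-web": 3}, {"tito-web": 1})
# → [('start', 'tito-web'), ('start', 'tito-web')]
```

The app name `tito-web` is just an example; the point is that lifecycle management becomes declarative: you state how many replicas you want, and the loop converges toward it.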

Learn more

02/13/2017 From Overblog

A Self Healing application demonstration with vRealize Automation, vRealize Orchestrator and vRealize Operations

My colleague Alexandre Hugla did a great job of showing an example of a self-healing app using a combination of vRealize Operations (detection) and vRealize Orchestrator (remediation). Here is his video: a demonstration by Alexandre Hugla, presales consultant specializing in Cloud and DevOps solutions at VMware, built around several VMware solutions to deliver the best of application administration: self healing.
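The pattern in the demo can be sketched in a few lines: a monitoring tool raises typed alerts, and each alert type is mapped to a remediation action. This is a minimal, hypothetical sketch (plain callables stand in for vRealize Orchestrator workflows; all names are illustrative):

```python
def self_heal(alerts, remediations):
    """Minimal sketch of the self-healing pattern: detection produces
    alerts (dicts with a 'type' and a 'resource'), and each known alert
    type is dispatched to a remediation callable. Returns the actions
    actually taken, so they can be logged or reported.
    """
    actions_taken = []
    for alert in alerts:
        remediation = remediations.get(alert["type"])
        if remediation is not None:
            remediation(alert["resource"])
            actions_taken.append((alert["type"], alert["resource"]))
    return actions_taken

# Hypothetical wiring: "restart" the service when it is reported down.
restarted = []
self_heal(
    [{"type": "service_down", "resource": "web-01"}],
    {"service_down": restarted.append},
)
# restarted is now ["web-01"]
```

The value of the pattern is the decoupling: detection (vRealize Operations) and remediation (vRealize Orchestrator) evolve independently, connected only by the alert-type mapping.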

Learn more

12/30/2016 From Overblog

Life and Death of an application Part 3: From Build to RUN

In the previous articles of this series, "Life and Death of an application", we saw how to easily develop a web app plugged into a Big Data service (Google Directions). Then, we saw how to design the architecture that will run it by leveraging the different artifacts (code, configuration scripts) and creating a "converged blueprint" to merge these Dev artifacts with Ops services (servers, storage, network and security services). Now comes the fun part: we are going to request the service, see which actions we can take once it's deployed, and then see how Ops can actually help us make sure the application keeps running in the long term. Ready? Go! I want my application to be built! Yes Sir. No problem Sir. Here you go. Simply pick your application in the catalog; then you have a quick form asking you...

Learn more

12/27/2016 From Overblog

Life and Death of an application Part2: First steps in the Software Defined Datacenter

In Part 1 we saw how the Tito application was created and what its architecture is. Basically, we have the following diagram, which showcases what kind of infrastructure the application needs to run properly in the datacenter. So how do we get this infrastructure? Which is another way of asking: where do I run my code? Architecture Tito. The current method: in most enterprises today, you create a ticket and send it to a specific IT team; then a number of unknown people work on it; then you have back-and-forth exchanges with some of them; then, many weeks later, you receive a virtual machine; then you install all the necessary application components; then you install your code. On average it takes between two weeks and four months. There are so many manual steps that...

Learn more

12/13/2016 From Overblog

Life and Death of an application - Part 1: the Birth

The whole application lifecycle is currently going through massive changes. From creating applications to launching them, from running new applications to transforming existing ones, there are now so many ways to do it, so many choices, that the hardest part is understanding the benefits and risks of each road in front of us. I like to think of this as a "good" problem: it's better to have many choices than none. A common theme among all these new roads is that they all allow you to be faster. Faster to develop, faster to deploy... and also faster to fail. Looking back a few years, applications were much more static than today. They would be developed through rigid methodologies to attain a high degree of completeness as soon as possible. One reason for this was that the agility level was...

Learn more

01/11/2016 From Overblog

Monitoring vRealize Automation by simulating machines deployments

A while ago, a customer faced an issue with vRealize Automation (vRA) due to the underlying infrastructure. The impact was that no user could deploy any new machines: deployments would miserably fail. Since this customer was not using vRealize Operations (vR Ops) to monitor vRealize Automation, I advised him to do so, as well as to use Log Insight to monitor the logs. But the fact is that this approach primarily monitors the internals of vRealize Automation, not the vRealize Automation service itself. In many situations, monitoring the service from a user's point of view is the best metric one could have. So I also advised him to create a workflow that would trigger a "probe" deployment in vRA and send the result to vR Ops. Since I liked that idea, I decided to actually build the workflow...
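The probe idea described above can be sketched independently of any product API. In this minimal sketch, `deploy` and `teardown` are injected stand-ins for the real vRA calls (hypothetical here; the actual workflow in the article runs in vRealize Orchestrator), and the returned payload is what you would push to vR Ops as a custom metric:

```python
import time

def run_probe(deploy, teardown):
    """Sketch of the 'probe deployment' pattern: request a throwaway
    machine, time how long provisioning takes, clean it up, and return
    a metric payload reflecting the service as a user experiences it.
    Any failure counts as the service being down from the user's side.
    """
    start = time.time()
    try:
        machine_id = deploy()            # request the probe blueprint
        elapsed = time.time() - start
        teardown(machine_id)             # destroy the probe machine
        return {"availability": 1, "provision_seconds": elapsed}
    except Exception:
        # The deployment failed: report the service as unavailable.
        return {"availability": 0, "provision_seconds": None}

# A probe against a fake, instantly-successful deployment:
metric = run_probe(lambda: "probe-vm-1", lambda vm_id: None)
```

Run on a schedule, this gives a user-centric availability signal that internal log or component monitoring cannot provide on its own.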

Learn more