Estimated reading time: 9 minutes

This post was originally a vlog post from October of 2020. For reference, we post our vlogs here with the full transcript below along with clarifying comments and graphics. You can check out all our vlogs on our YouTube channel.

This week we tackle the question, “Is Virtualization Needed for Cloud?” which is pretty interesting and of course requires several long-winded corporate stories. If you don’t see the embedded video above, here’s a direct link.

Is Virtualization Needed for Cloud Computing?

Today we want to answer the question, Is Virtualization Necessary for Cloud Computing? Hopefully, by the end of this short video, I’ll be able to outline what virtualization is and whether it’s necessary for cloud computing. Just a quick caveat: this video excludes virtualization as it applies to microservices and networks.

I can’t tell you the answer to the question Is Virtualization Needed for Cloud Computing until I recount a sordid tale from my corporate past. When I first got into the server business a long time ago, enterprises shied away from putting 3-tier architecture components on the same server. Your 3-tier architecture had web, app, and database layers. These were almost always 3 different machines for reliability and cleaner intercommunication. When apps were being developed or tested, you might have seen them all on 1 machine.


As we’ve discussed in the past, in 3-tier architecture, web servers only presented data, application servers ran the business logic, and database servers stored the data. These servers were also inherently redundant, so they were costly. Once you had them configured, they’d sit there, idling. So you had 3 servers at $2-$5k apiece, sitting there idling. If you measured the overall usage of these machines, it wouldn’t be unusual for them to run at 1-10% utilization. If your marketing department thought that a million visitors were going to come to your website, you did something called building to peak: scaling your environment to accommodate the worst-case scenario on the high side. So if you thought you were going to have a million visitors, you had to build for 1 million visitors even though only 50,000 may have made it to the site. This was super costly, but there were early startups that got media attention (usually TechCrunch) whose sites would literally crash under the load, which was a bit embarrassing for a startup, especially one in tech.
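To make the build-to-peak waste concrete, here’s a quick back-of-the-envelope sketch. The visitor numbers are the hypothetical ones from the story, not measurements:

```python
# Sketch of the "build to peak" problem: capacity is provisioned for the
# worst case, so typical traffic only touches a fraction of it.
def utilization(actual_visitors, peak_visitors):
    """Fraction of built-to-peak capacity actually used."""
    return actual_visitors / peak_visitors

# Hypothetical numbers from the story: built for 1M visitors, saw 50k.
print(f"{utilization(50_000, 1_000_000):.0%}")  # 5%
```

Paying for 100% of the hardware while using 5% of it is exactly the idling problem the rest of this post is about.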

So, to answer the question, “is virtualization necessary for cloud computing”, we need to wait a bit longer. I’d like to do a quick recap on what virtualization is. Virtualization allows you to put multiple logical servers on a single physical server. So you have this large server, you install a hypervisor, and then divide the resources of that ONE server into small chunks which operate independently of each other logically. This way you get the specialization that enterprise computing demands without having to allocate a single server to each discrete function.

Why Normalization Is Good

Here’s another problem that virtualization solved, and it’s probably the most amazing benefit we get from virtualization. So strap in for another sordid corporate story!


When I worked in the managed hosting business back in the late ’90s, we would manage systems for clients. Let’s say you had a very specific HP server sitting in a cabinet and an identical cold spare sitting next to it. Sometimes the running server would fail, so you’d take the hard drives out of the failing machine, put them into the cold spare, and turn it on. In some cases, that computer might not boot and start working, ALTHOUGH in theory it really should have. There were a bunch of small nuances that could cause this cold spare not to work. For instance, it could have been a difference in firmware between the servers, or some slight change in manufacturing versions, even though the model numbers were identical. It may have been a patch that was applied to one machine but never initialized because the server hadn’t been rebooted in months. It could have been something else entirely, so you were never guaranteed an easy recovery. In the end, this meant you had to image a new machine from scratch, carefully reinstall all the applications, copy the data off those hard drives onto the new server, test, and then run.

So at the most basic level, what does virtualization do for us? It allows you to have those single-purpose servers, still separated in every regard, but sitting on the same piece of hardware. This lets you almost fully consume the resources of each server you put into production with very little idling. But if you are to take anything away from this video, it’s the resolution to the story I just told: virtualization allows us to “normalize” CPU, RAM, and storage resources. Normalization is the idea that types and origins no longer matter. CPU, RAM, storage space, and network are made common. Where they come from and what brand of hardware they run on no longer matters. In other words, the origin of those resources isn’t taken into account; they just need to be there.

Virtualization allowed the modern cloud

Where Virtualization Helps

Here is why (finally) virtualization is needed for cloud computing. When we built systems for clients before, we’d allocate specific servers to them. Whether those servers sat at 1% utilization or were perpetually clobbered 24 hours a day didn’t matter. Those servers cost the service provider the full capital cost of the server, network, software licensing, monitoring, management, space, power, and cooling. This server could also only be used by 1 client. Putting someone else on this server was impossible; there was no easy way to logically separate this single resource amongst clients. (There was something called shared hosting, but it wasn’t something most enterprises would tolerate unless it was a simple brochure type of website.) This is akin to renting an apartment to a family: if the family goes out to dinner, you can’t rent that apartment to another family for the 3 hours they are out.

Now back to cloud computing, where you build a platform full of servers and sell access to the cumulative resources those servers offer. Say you buy a handful of servers that give you 10 CPUs, 10 GB RAM, and 10 GB of storage. You can sell this at an hourly rate, and when it’s given back to you, you can have it virtually “rented” by another entity. The buyer benefits because they can buy in bite-sized chunks, and the service provider benefits because they fully utilize their capital investment in servers. Of course, they will need to build in a little overhead for surges of business, but the more automation they employ, the more competitive their rates can be. From an uptime perspective, since these instances are virtual, fully inclusive, and lighter, you can start, stop, clone, and move them around in a way not possible before.

Buy Your Compute By the Drink


So cloud computing is really predicated on the premise that you have virtualization. The virtualization doesn’t necessarily make the raw compute cheaper, but it does 2 important things: 1. Buyers can buy in small chunks with little to no commitment. 2. Sellers can sell their entire capacity and scale their environment without having to match their hardware, thanks to the normalization we discussed earlier. If you need 10 racks of hardware, buying that in the cloud will probably be more expensive, but the fact that you could have slowly scaled up to 10 racks of gear 1 CPU at a time is the real benefit here.

One last little story that involves virtualization and VMWare. The first time I saw VMWare run was on a friend’s laptop. He was an engineer for a software company, and his role was to take some of the applications his prospective client was using and create a proof of concept showing how his software would help integrate them. So he’d literally mock up their environment on his laptop, load the test data, and demonstrate to the client what the integration would look like using their actual applications and test data.

Playing Rugby

Installing a Windows Server OS, SQL Server, and several enterprise applications on a laptop is not the best idea if you still need to check your email, fill out expense reports, collaborate with your colleagues, and not have IT run down the hall and tackle you. So he would build these proofs of concept in a virtual instance that was fully isolated on his laptop. Once the client saw the proof of concept, he could delete that VM instance, which was literally a single file, or copy it off to another device. Meanwhile, his pristine laptop, imaged by IT the day he was hired, stayed in perfect shape since all that craziness lived in its own self-contained instance in a single virtual hard drive file.

I hope this video helped explain how virtualization helped the cloud become the cloud, and hopefully I answered the question Is Virtualization Needed for Cloud for you. The overall idea is that you are abstracting operating system instances from hardware. Compare that to microservices, which we covered last time, where you are abstracting code from the operating system; we’ll cover that in a future video when we go into the world of DevOps. Thank you again, and I look forward to seeing you when we release our next video!

We have an evolving library of Cloud Computing resources that you can use for any research you have.

I am not proud to admit that I used to put security into place to satisfy an audit. It took me time to learn that security is the foundation of any system. What I thought was security hype was really the need to increase cyber security awareness. Let me start with a story…

Early Corporate Days

I worked at a global 100 firm after having worked for a much smaller and more nimble firm for years. I think I associated security with:

  • changing my password every 3 months
  • having no password management tools (like 1Password)
  • not being allowed to check personal email
  • being removed from all internal systems each year and having my manager approve access to each one individually
  • slow VPN access for a job I traveled a lot for
  • generally slow and outdated (and ugly?) enterprise software

Hindsight is 2020

I realize that a single breach would have tarnished the reputation of this firm to the point of ending our business unit. This explains why they implemented every level of security possible. Perhaps if our security team better communicated to us what they faced daily, we would have been far more open to working through all these extra layers.

I am not a psychologist but here goes…

Another reason for my hesitation is that, as a technical person, I assumed it would be harder to fool me with a phishing attack. This is of course unreasonable, as there are scores of employees outside the IT department who provide essential services to an organization.

Perhaps it’s human nature to resist anything that is overwhelmingly promoted or pushed, EVEN if it makes total sense. Perhaps we feel it’s an attack on our individuality, and we have this desire to remain independent and unique.

SO… when security breaches went from website defacements to a profitable enterprise, these same feelings about security came up again, although we were always careful in our security implementations. To be sure, these steps were used as sales and marketing points in our pitches.

Cyber security awareness is not noise

When I Realized It Wasn’t “Security Hype”

So, that’s why I thought it was noise before… and as irrational as my resistance was, here’s the set of circumstances that snapped me out of my “security hype” belief:

Not my small town, but pretty close…

I am not the Sheriff but I speak zoning…

I serve as a volunteer on the planning commission in my hometown. With this role, I have a city email address as well. I recently got an email from a phisher who tried to convince me he was our Mayor and needed me to buy gift cards for some strange reason.

Cyber security friends tell me to expect a breach eventually, even with great security. This really nullifies my original belief that there is a lot of noise in the security space. The good news is that building cyber security awareness is a great first step, and I see it everywhere.


Final Words

I was put into a month-long bootcamp at my first technology job. One of the most important aspects was online security. We learned that web logs could reveal the last page you were on via the HTTP referer header. Using that information, a poorly formed URL structure could give away critical data such as an intranet location with a client’s name, or a future acquisition list for the firm we worked for. We need to go back to bootcamps and periodic training if we are to protect our organizations.

If you’d like to learn more about cyber security awareness and strategy, check out our managed security services page.


“Why is Cloud So Expensive?” is a question that no one wants to ask or answer after completing the challenging task of moving workloads out of legacy platforms.

In this post, we look at one case study of a Software as a Service (SaaS) provider that had predictable costs in private cloud and found themselves spending more in public cloud. We’ll attempt to illustrate why.

  • About 87% of our clients have applications based on 3-tier rather than born-on-cloud architecture.
  • Our client had a strong development team but a small operations team.
  • Public clouds can easily cost as much as, if not more than, managed private cloud.

Cost Summary Over Time

In the presentation below, you can see that my client had these costs:

  • 2015 – Private Cloud @ $17,000 (fully managed by private cloud)
  • 2017 – Public Cloud + Consultants @ $24,000 (Public IaaS + Consultants to help manage gaps in scope like a DBA)
  • 2019 – Public Cloud + Certified Managed Services Provider @ $20,000 (Public IaaS + MSP to fully manage all)

We also noticed that the ad-hoc consulting costs didn’t offer the consistency or ownership exhibited by a specialized Managed Services Provider (MSP).


A Quick Note on Economies of Scale

  • A NOC (Network Operations Center) is a costly operation; it’s better to share this resource.
  • Monitoring, patching, trouble ticketing, event correlation, and inventory management systems are best shared amongst many organizations.
  • Engineers are difficult to recruit and retain unless you are an IT-centric organization.

Economies of scale are usually talked about when it comes to hardware but they can also apply to expertise and staffing. A fully functioning NOC with all levels of support is costly to build and maintain.

Cost Overview from 2015 – 2019

The chart below takes you through what my client spent in 2015, 2017 and finally in 2019.


So with this case study, to answer the question “Why is cloud computing so expensive?”, we can say:

  • Economies of scale apply to both hardware and personnel.
  • Public cloud can be quite amazing for born-on-cloud code, but not great for 3-tier, vertically scaling architecture.
  • Public clouds are better priced for horizontally scaling environments.
  • It’s easy to turn on server instances and underutilize them.
  • Bringing in consultants with a single focus can be costly.
  • Consultants are great at solving point-in-time issues, but may not be the best at overall site ownership.


When moving to public cloud, it’s important to take into account that if your systems were fully managed before, you’ll need to be fully managed after the move as well.


I’d like to talk about colocation pricing. In this brief time, I’d like to cover:

  • how data center colocation pricing works
  • why data centers vary in price
  • and real world pricing examples

We can only imagine the curiosity an IT director faces as they begin to outgrow their in-house data center or realize that their new building lease has no suitable space for a good server room.

Today our goal is to put some clarity around the costs associated with colocation, from smaller regional providers to notable name brands. If you’d like a high-level overview of what colocation is, check out our article about colocation here.


Let’s Start With – the Components of Colocation Pricing

There are several components. In no particular order, they are space, power, cooling, bandwidth, and cross connects. We’ll begin with space.


There are several ways space is sold in a building-based data center. I am not going to cover containers or other portable data center footprints. Here are some of the options:

  1. By the U (a unit of measurement that is about 1.75 inches or 44.45 mm)
  2. 1/4, 1/3, 1/2, or Full Cabinet / Rack
  3. Preconfigured caged spaces with racks inside OR
  4. Preconfigured secure rooms that are almost completely isolated from the rest of the facility

Moving on to Power and Colocation Services Pricing

Power comes in several different flavors, and it can be one of the more confusing aspects of colocation pricing. When you are speaking about racks, you have:

  • 20A/120V power ~ 2.0 KW
  • 30A/120V power ~ 2.8 KW
  • 20A/208V power ~ 3.3 KW
  • 30A/208V power ~ 5 KW
  • 50A/208V power ~ 8.3 KW
  • 60A/208V power ~ 10 KW

This can go way higher, up to 60 or even 80 KW per cabinet depending on the data center. Most data centers also offer a 3-phase option.

Here’s how you calculate kilowatts:

Take your amperage and multiply by volts. But I only use 80% of the circuit in my calculation, as you aren’t supposed to load a circuit fully or you can trip a breaker. So for example:

  • 30A x 208V = 6,240 watts, BUT

80% of 30 amps is only 24 amps. So let’s redo the formula:

  • 24A x 208V = 4,992 watts, which rounds up to roughly 5,000 watts. To express that in kilowatts, move the decimal 3 places to the left: 5 KW.
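The 80% rule above boils down to a one-line formula. Here’s a minimal sketch of it (the helper name is made up, and the 0.8 derating factor is just the rule of thumb described above):

```python
def usable_kilowatts(amps, volts, derate=0.8):
    """Usable power on a circuit, applying the 80% continuous-load rule
    of thumb: amps x 0.8 x volts, converted from watts to kilowatts."""
    return amps * derate * volts / 1000

print(usable_kilowatts(30, 208))  # 4.992 -> the "5 KW" circuit
print(usable_kilowatts(20, 120))  # 1.92  -> roughly the "2.0 KW" circuit
```

Run it against the other circuits in the list above and you’ll reproduce the approximate KW figures shown there.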

Also, these circuits can be delivered as single circuits or as primary and redundant pairs. Redundant circuits double the capacity with the understanding that you’ll only use half, since the second circuit is meant for failover only.


Let’s talk hyperscale for a moment…

For hyperscale data center use, it’s not unusual to purchase space in terms of kilowatts or megawatts of power. Ordering 1 MW of data center space makes energy the main metric, and the facility would literally have a spec that includes the circuits, cabinets, cooling, and space to accommodate that power draw. This is also referred to as wholesale colocation.

Let’s Talk Cooling

Cooling is usually included as a component of power pricing unless your power requirements are really high. Most facilities can offer 10 KW cabinets at a standard price. Increasing this to 20 or even 60 KW can incur higher cabinet pricing, mainly to cover the cooling aspects of the service.

Bandwidth & Cross Connects

Bandwidth and cross connects show up in every data center quote. There are 2 ways I typically see this happening.

A client will establish bandwidth like this:

  • purchases 1 or 2 cross connects from the data center THEN
  • purchases bandwidth from 1 or 2 different providers for failover.


Some data centers can price out their own blended bandwidth that consists of 2 or more internet service providers. This allows the data center to quickly deliver bandwidth services to a colocation client without the wait, without the complexity of setting up failover via BGP, and without signing 2 contracts (one for colocation and one for bandwidth).

Moving on.

Pricing and Data Center Margins

Data center space is priced like any other product, using typical accounting models for overhead and margin, with some variances. The cost of acquiring data center space as a provider isn’t a known quantity, and there are many ways to start a data center operation, such as:

  • purchasing (financially) distressed or foreclosed data centers
  • pouring concrete and building a facility from scratch
  • purchasing and retrofitting existing buildings to be data centers

The point here is that the initial investment can vary widely, unlike other businesses and franchises with more predictable startup costs. This investment made by the data center provider has to be reflected in their pricing to end users.


The Power vs Space Dilemma

In addition to the startup costs of building a data center, there are other pricing pitfalls that product management has to avoid when pricing things out, such as:

  • Selling all their space BUT having plenty of power capacity left over (which means selling lots of low density space was the issue) OR
  • Selling all their power BUT having plenty of space left that cannot be sold (selling lots of high density space is what causes this) OR
  • Filling up the data center BUT they have slim profits after having filled the entire site (which means space was sold at too low of a cost)

I’ve been able to witness this miscalculation first hand, and it always seems to go down the same way. A major online company needed lots of space and didn’t want to build their own data center. Not being able to resist the massive influx of revenue, the data center in question sold a deal that was high in power, low in space, and priced at slimmer margins. Once that space was sold, it reduced the power capacity available to the rest of the data center to the point where there was plenty of square footage but no power left to sell in it. From that point forward, each new client’s power requirements had to be closely calculated, and each additional power circuit could only be sold after an audit of available power and cooling. Some data centers are able to add cooling and power capacity, but in many cases the local utility feed is limited unless major capital expenses are thrown at the issue.

Real World Pricing Examples

Cabinet or Rack Pricing

Cabinets are usually sold in 1.75-inch increments referred to as a “U”. Many cabinets are 42U, but there are smaller and taller ones depending on the facility. For some high-density and space-efficient designs, it’s not unusual to see much taller cabinets. We’ve seen these cabinets run from $300 to $2,000 per month based on region and cooling requirements.

Graphic: power pricing of $500-$2,000 per month for 30A/208V power, which is 5 kilowatts

Power Pricing

Power or Energy is really the driving factor for almost all things here.

1 x 30A/208V Power – Primary + Redundant Power tends to cost between $500-$600 per month. Most retail colocation assumes that you will consume the entire amount of power allocated to you which on a 30A circuit would be about 24 amps. Wholesale pricing for power is different.

Cross Connects

There is quite a variety of pricing, from a single one-time charge of $50 up to $350 monthly. While this is sometimes a source of pain (why are you charging me so much for a piece of fiber that runs 50 feet?), data centers will often justify it by saying that the cross-connect is treated as a mission-critical circuit: it’s included in the service level agreements and fully monitored, with the data center responsible for keeping it connected and secure. For some data centers, it’s a major stream of revenue justified by the multitude of network access one is given in that particular site. Access to almost all bandwidth providers can allow an infrastructure company to grow quickly, as all their important partners are just a cross-connect away.



What I call “in-house blended bandwidth” varies greatly. Generally, a Gbps to the internet will run between $500 and $1,000 per month depending on the carrier.

Remote Hands

I’ve seen this fall into the range of $100 to $200 per hour. Data centers like to bill in 15 or 30-minute increments, meaning that if you ask for a task to be done (say rebooting a server) and it only takes 5 minutes, they will round up to the nearest 15 or 30-minute mark. Some data centers will offer better pricing if you commit to a monthly remote hands allotment. That way, the data center can gauge their supply of on-hand personnel and staff to ensure timely support if you pay for it consistently on contract.
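The increment-rounding math works out like this. A minimal sketch, assuming a hypothetical $150/hour rate and 15-minute increments (both within the ranges mentioned above):

```python
import math

def remote_hands_charge(minutes_worked, rate_per_hour=150, increment_min=15):
    """Round the time worked up to the next billing increment, then
    bill at the hourly rate."""
    billed_min = math.ceil(minutes_worked / increment_min) * increment_min
    return billed_min / 60 * rate_per_hour

# A 5-minute server reboot still bills as a full 15-minute increment.
print(remote_hands_charge(5))   # 37.5  -> $37.50
print(remote_hands_charge(40))  # 112.5 -> 45 minutes billed
```

The takeaway: short tasks carry an outsized effective hourly rate, which is why batching small requests together can save money.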


Things Not Covered

My goal here was really to outline data center pricing. Some colocation facilities offer the most basic space, power, cross connects, remote hands, and nothing else. Other facilities can get pretty wild and deliver so many services on top of colocation that it starts to look like a managed cloud. If you currently need to set up a data center strategy, we are able to reach out to multiple colocation vendors in a single shot, pull back pricing, and present it to you in a very easy-to-read manner. This is a free service; just look for a link below to start the process.

As always, thanks for joining us this month and I look forward to seeing you on our next video.


The text that follows is a summary of our video. This month we discuss the real technical differences between public cloud vs private cloud. If you’re interested in more details about the cloud, we are constantly adding to our What is Cloud Computing Library.

We Aren’t Discussing Public vs On Premise Cloud

On Premise vs Public Cloud

When you search for diagrams for public vs private cloud, you tend to get more of a logical diagram that shows on-premise vs public cloud which we’ve drawn above in our own diagram. That depiction, while helpful, isn’t an accurate representation of the differences between public and private clouds.

Let’s start with the basics, what is a Hypervisor?

To kick things off, we need to address what a hypervisor is so let’s define it:

A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine.

Courtesy of Wikipedia

Also, for ease of discussion, I am going to use terms from the VMWare ecosystem of virtualization, as it’s just more relatable for most people. Please note, there are many commercial and open source hypervisors out there.


Let’s set the stage. When building a cloud using hypervisors, you start with a server, and instead of installing Windows or Linux, you install VMWare vSphere. vSphere is the hypervisor; it makes this machine a HOST and allows you to add a bunch of various guest operating systems, which can be almost any operating system that runs on x86 architecture. So let’s walk through public cloud and how the hypervisor is situated. I’ll start off by building groups of hypervisor machines; in the VMWare world, this is referred to as an ESX cluster.


This cluster can absorb additional physical servers very easily, adding the new server’s resources to the cluster’s pool. The virtual instances we use are spread amongst many servers throughout the racks, and if one server goes down, its virtual instances are spun up instantly on a different machine.

Public Cloud

Remembering that this example is for public cloud, look at how they sell VM instances. Their clients don’t really know what infrastructure is behind the scenes. They don’t see the complexity of grouping hypervisor machines together. They just see the individual virtual machine instances that they purchase, typically through some type of portal that allows them to add servers, CPU, RAM, and storage. The client is only responsible for the actual VM instances and not the underlying infrastructure, which is no simple feat to properly manage.

As far as billing, the clock starts when you spin up an instance, and instances can be billed up to 720 hours per month. So in theory, you are mixed in with other firms on these massive ESX host farms, which are logically separated. The networking between all of this is mainly software defined, and the public cloud can add capacity simply by adding rows of servers and storage, keeping some level of overhead above and beyond the forecasted client need.
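To see why the 720-hour cap matters (24 hours x 30 days), here’s a tiny sketch of hourly instance billing; the $0.10/hour rate is hypothetical:

```python
def monthly_instance_cost(hours_running, hourly_rate):
    """Instances bill by the hour, capped at 720 hours (30 days) a month."""
    return min(hours_running, 720) * hourly_rate

# Hypothetical $0.10/hour instance: always-on vs. business hours only.
print(monthly_instance_cost(720, 0.10))  # 72.0 -> full month
print(monthly_instance_cost(160, 0.10))  # 16.0 -> ~8h/day on weekdays
```

This is the “buy by the drink” model in practice: an instance you shut down when idle costs a fraction of an always-on one.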

Sample Public Cloud Offering Logical Diagram

Public Cloud in Review:

  • Massive ESX clusters
  • Instances are in a community cloud
  • Secure, but limitations on custom hardware

Learn about some of the limitations of public cloud in our Disadvantages of Cloud Video

In public cloud, you don’t control the hypervisor, you are renting instances on someone else’s hypervisor.

Switching to Private Cloud…

Keeping some of that terminology in mind: a cloud provider allocates 3 servers to you and builds an ESX cluster on them. That’s 3 servers, with hypervisors on each, clustered so that all the resources of these machines are pooled. Additionally, they give you access to storage and network, and now you allocate your VM instances up to the limit of your cluster.

Adding a 4th ESX Cluster Server to increase RAM and CPU by 25 units each

Let’s say you use 3 Servers for your cluster giving you the following capacity:

  • 100 vCPUs
  • 100 GBs RAM.

You can create 100 virtual servers, each having 1 vCPU and 1 GB of RAM. To grow, you can’t go to the service provider and ask for an additional virtual machine instance (e.g. 1 CPU, 1 GB RAM); you will need to add another dedicated server to the ESX cluster. This gives you another bucket of resources from which you can add more VM instances with CPU and RAM.

When you grow, there’s a minimum step you need to take, each at substantial cost, because you are buying 1 full server of compute even if you only want to add a single VM instance with 1 GB of RAM and 1 vCPU.
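That step-function growth can be sketched in a few lines. The 25-vCPUs-per-server figure is a hypothetical block size, not a real spec:

```python
import math

def servers_needed(vcpus_wanted, vcpus_per_server=25):
    """Private cloud grows in whole-server steps: you always buy a full
    server's worth of capacity, even for one more small VM."""
    return math.ceil(vcpus_wanted / vcpus_per_server)

# Wanting just 1 vCPU beyond the 4-server capacity forces a 5th server.
print(servers_needed(100))  # 4
print(servers_needed(101))  # 5
```

Contrast this with public cloud, where that 101st vCPU is just another small instance on the provider’s shared clusters.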


What is Bare Metal Hosting?

With some hosting providers, you will see an offering referred to as Bare Metal. With Bare Metal, you are handed raw machines on which you can add your own hypervisor layer and create your own ESX-like environment.

In this case, you are no longer relegated to just VMWare and you can look at other commercial or open source hypervisors like Linux KVM or Xen.

So in public cloud you are using a shared hypervisor layer managed by the hosting provider. In Private Cloud you are using a private hypervisor layer where it can be managed by either the service provider or the end user.

In the end, there are many exceptions to these rules and to everything I’ve said, but those are the fundamentals that we’ve seen here at ColoAdvisor. It comes down to who manages the hypervisor and whether it is shared or dedicated.

For additional information on cloud computing, check out our What is Cloud Computing library and also check out Is Virtualization Needed for Cloud Computing. You can also reach out to us at anytime using our contact page.

Estimated reading time: 10 minutes


We repost our vlog here along with the full transcript. If the embed above isn’t working, here is the direct link to the video.

If you’d like a bit of background on what cloud computing is, check out our What is Cloud Computing page.

Cloud Deployment Models

Search for the phrase cloud deployment models and you’ll end up getting articles that reference public, private, and hybrid clouds. In some instances, they’ll use the word multi-cloud as well. While that’s accurate, I really wanted more depth and nuance on the topic. The public, private, and hybrid cloud terms aren’t prevalent in many of the discussions I have with my clients. This is a pretty vast topic, so I hope to do it justice. Please note, this is by no means authoritative, because there are many ways to view and categorize the various ways to deploy your systems into the cloud.

A Tangent About Horizontal vs Vertical Scaling

Before we get into the deployment models used in cloud, I'd like to start with a brief primer on horizontal and vertical scaling, because I'll use these terms a lot in this video.

Simply put, vertical scaling works like this: take the web, app, and DB layers of the environment, picture them stacked on top of each other, and remember there are limitations when sharing the workload at the application and database layers. Let me pick on Microsoft SQL Server for a moment: clustering a Microsoft SQL database was really about failing it over when one of the two servers went down. It wasn't designed to share the load between two machines. Said differently, originally Microsoft SQL clusters were for uptime, not for processing more work. Because you were relegated to using a single database server at a time, your only option for scaling was to add CPU and RAM to that same single server. The server got bigger, so we represent that as vertical. You could not easily scale it out to a second machine, which is what horizontal scaling refers to. Microsoft has long since addressed this issue, and other database vendors struggled with it as well, so I only use it as an example. Application servers suffer from the same issue: if the logic isn't built into the application enabling it to look for other instances to process the workload, you end up having to build massive single-instance servers.

Let's move on to horizontal scaling. Imagine that you can take these original 3 tiers and have an array of these devices going left and right, as wide as you'd like. Each server can literally be garbage, because if one of them fails, it can be kicked out of the rotation and the load is split amongst the remaining working servers. We've done this fantastically at the presentation, or web, layer forever. When you are able to do this at all 3 layers, you start moving in the direction of being elastic.
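To make the horizontal idea concrete, here's a minimal sketch (hypothetical Python, not from the video) of the rotation a load balancer performs: requests cycle across a pool of interchangeable servers, and a failed server is simply kicked out of the rotation while the survivors absorb its share.

```python
import itertools

class Pool:
    """Toy round-robin rotation over interchangeable servers."""

    def __init__(self, servers):
        self.servers = list(servers)

    def kick(self, server):
        # A failed server is removed from the rotation entirely.
        self.servers.remove(server)

    def route(self, n_requests):
        # Cycle requests across whatever servers are still healthy.
        rotation = itertools.cycle(self.servers)
        return [next(rotation) for _ in range(n_requests)]

pool = Pool(["web1", "web2", "web3"])
pool.kick("web2")      # web2 dies; traffic keeps flowing without it
print(pool.route(4))   # ['web1', 'web3', 'web1', 'web3']
```

The individual servers carry no state the pool depends on, which is exactly why each one "can literally be garbage."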
Let's move into what this video was really supposed to be about, which is the different methods of cloud deployment we see. I'd like to outline all the options available to us. Some housekeeping: I will lump some things in here that aren't considered cloud at all, but they are still relevant to the discussion, so I'll keep them in. Also, I am going from oldest to newest technology. Lastly, I am not going to dive deep; this is a super technical topic that would be hard to cover in the 10 minutes we get together each week. So, don't laugh, but I am going to start with



Mainframe

While you'd think mainframes are the opposite of cloud, in a manner of speaking it could be argued that the mainframe is the original cloud, just without the internet to make it universally accessible. Let me explain. You had massive horsepower in a single concentrated machine that could be partitioned to run different functions. Folks from around the company would connect to it using fairly low-powered terminals, and all the data resided in a single place. Take a mainframe, give access to it over the internet, and it's cloud. Just a really old, hard-to-maintain, and expensive cloud. The second deployment method is:

Client Server

is not really cloud but has been adapted to the cloud. The easiest way to outline client/server is to think of any desktop application you use. It's an executable file that runs on a computer, and its data is typically stored on that same computer. There's no web, app, and database layer to it, but it's still a staple for many businesses. In the healthcare field, many applications are still client-server, with software publishers trying to offer hosted versions of those applications. To "enable" one for the cloud, you basically have everyone log in to a terminal server which "presents" the application screen to you, almost like streaming a video of the screen. If you've ever used LogMeIn, or gotten remote IT support where your keyboard and mouse are taken over by a technician, it's very similar. So terminal services are how these are deployed in the cloud, but scaling the application itself is a bit tricky: the application runs on one machine and the database on another, and you just add more RAM and more processors to scale. I imagine most of those software providers are working on re-coding their applications to run in the cloud using a more modern method. #3:


3 Tier Web Application

This was the happening thing when I started in IT in the '90s. There are 3 tiers: web, application, and database. Users could use any web browser; the web layer presented the data, the application layer processed the logic and was really the brains of the application, and the database stored all the data. Many of these applications scaled vertically, meaning you'd just get bigger servers, with the exception of the web or presentation layer, which allowed you to scale using load balancing. In time, applications and databases became easier to load balance, which allowed them to scale horizontally. This let the distribution of processing leave that original app or database server for one sitting next to it. If there were a version 2 of 3 tier application architecture, it would have been virtualization. Method #4 is



Microservices

Microservices-based architecture is new and just so different from what we are used to that it may take the longest to explain. If you are familiar with the way an API works, you can quickly grasp what microservices-based architecture is. Rather than writing a lot of code into a monolithic, single program with all the components in one place, you build containers for your application. Each container holds all the libraries needed to execute one specific aspect of the software in a very single-tasked sort of way.

Let me do a simple example: you build an application for your online store. You have a shipping microservice in your application, which receives an address and a weight from the invoicing microservice, computes the shipping cost and estimated delivery times, and hands them back to the invoicing microservice. Let's say it's holiday time and the invoicing microservice is getting clobbered. The orchestration and scheduling app would fire up more invoicing microservices and let the system know: hey, there are 2 of me now, and you are free to request invoice creation from me. As this increases, the load on the shipping microservice will increase too, so scheduling and orchestration will say: hey, we need to spin up another instance of this shipping microservice until I notify you otherwise. If the app is truly elastic, it will note the decrease in requests after this surge and turn off those extra instances, meaning you are handing resources back to the infrastructure. If this is a public cloud, you are no longer being charged for that computing resource. The takeaway is that you've broken your application into small sections, each responsible for processing very specific tasks and always available to provide the computation requested of it on demand.
This model is very easy to scale horizontally, again meaning you can scale instances left and right with lower-powered containers and then share the compute load over those new instances. You don't need to add massive CPU and RAM to single instances, which is what we did when we scaled vertically.
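The scheduler's decision above can be sketched as a naive scaling rule (hypothetical Python, vastly simpler than what a real orchestrator like Kubernetes does): run enough copies of a microservice to cover the backlog, and hand the extras back when the surge passes.

```python
import math

def desired_instances(pending_requests, capacity_per_instance, minimum=1):
    """Naive autoscale rule: enough instances to cover the backlog,
    never fewer than the minimum kept warm."""
    needed = math.ceil(pending_requests / capacity_per_instance)
    return max(minimum, needed)

# Holiday surge: the invoicing backlog spikes, so more copies spin up...
print(desired_instances(pending_requests=950, capacity_per_instance=100))  # 10
# ...and when the surge passes, the extras are handed back (elasticity).
print(desired_instances(pending_requests=40, capacity_per_instance=100))   # 1
```

In a public cloud, the difference between 10 instances and 1 is the difference in what you're billed, which is the whole point of the model.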


Serverless Computing

is our 5th deployment method. Think of serverless this way: it's a system that just runs code. It says, give me your code, I'll execute just your code, and I'll output the result wherever you want, as many times as you ask me to. In the serverless world, the major players would be the likes of AWS Lambda or Google's Firebase. You can set up your code to automatically trigger from other cloud services or call it directly from any web or mobile app. Serverless really shines when you have a process that needs horsepower, but in small intervals, and the results are handed off to some other infrastructure.

I think serverless is probably the trickiest of all these models to explain, so I'd like to give a real-world example of how something like this could be used. Let's say you run a site that takes a rectangular profile picture and spits it back out to end users with rounded edges, which incidentally is a service I've used because my Photoshop skills are garbage. You would have a simple website, and folks would upload their rectangular picture to the server. To convert it, you would need some code to run a bunch of computations on the image. It may not be worth having an entire environment built out to execute code that may sit idle for long periods of time. So you have this very simple website which takes the file and pushes it to a serverless instance that crunches it, rounds the corners, and then places the resulting file in some online storage with a download link presented to the client. The total computation and processing of that function could be almost free, especially when sent to some massive infrastructure that processes code all day. You are not occupying any compute 24 hours a day, just tapping a service that can execute your code like a utility.
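For flavor, the corner-rounding "crunch" that serverless function would run could look something like this minimal sketch (pure hypothetical Python; a real version would use an image library like Pillow inside a cloud function handler). It decides, per pixel, whether the pixel survives rounding: anything in a corner square that falls outside the quarter circle becomes transparent.

```python
import math

def rounded_alpha(x, y, width, height, radius):
    """Return 255 (opaque) if pixel (x, y) survives corner rounding,
    or 0 (transparent) if it gets clipped off."""
    in_corner_x = x < radius or x >= width - radius
    in_corner_y = y < radius or y >= height - radius
    if not (in_corner_x and in_corner_y):
        return 255  # edges and middle are untouched
    # Center of the quarter circle for whichever corner we're in.
    cx = radius if x < radius else width - 1 - radius
    cy = radius if y < radius else height - 1 - radius
    return 0 if math.hypot(x - cx, y - cy) > radius else 255

print(rounded_alpha(0, 0, 100, 100, 10))    # 0   (corner tip clipped)
print(rounded_alpha(50, 50, 100, 100, 10))  # 255 (middle untouched)
```

Running this over every pixel of an uploaded image is exactly the kind of short, bursty computation that makes sense to rent by the invocation rather than host 24 hours a day.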
So let me wrap this up and pray that I haven't made it more complicated than it needs to be. When looking at these deployment models, one may sound better than another, but they are really the continuum of how our application delivery technology is developing. The goal of this video is not to promote or push an angle, since changing deployment models without changing the code is not something you can aspire to. You are relegated to the cloud deployment model you have, based on how the code is written. You can choose to code all new applications using something modern that runs on these newer cloud methods, but you can't simply take your existing application and deliver it through a cloud technology that can't serve it to your clients at a reasonable price. Thank you for joining me this week, and I hope this shed some light on the various ways applications are delivered.

Vlog 2 – Disadvantages of Cloud Computing

Introduction – Disadvantages of Cloud Computing


This week we dig deep into the disadvantages of cloud computing. For a decent overview of cloud computing, check out our perpetually evolving What is Cloud Computing Resource page.

Disadvantages of Cloud Computing, Episode 2

The remainder of this post is the transcript of the video above.

I think the biggest issue with our videos is coming in from the upbeat music in our intro to someone like me, at a much lower energy level, talking about cloud. It reminds me of that famous Casey Kasem clip where he was upset at his producers for doing the same thing to him. (visual+audio clip)

Before we get into the disadvantages of cloud, let's ask an important question: what did the cloud do for us? To answer, cloud gave us nothing more than the ability to purchase compute in smaller increments than was possible before. Still speaking at a super high level here, but the cloud is like connecting your home to municipal water. Pay-by-the-gallon works great for a household, but you may get clobbered if you try to water an entire farm using municipal water. If you own a farm, you may need to dig your own well and maintain it yourself. So if you currently use 50 racks of hardware to run your IT, you don't really need to buy in small increments unless that environment can scale down to almost nothing during off-peak hours. Imagine taking your legacy database that requires 1 TB of RAM and running it at 1 GB of RAM. That would probably lead to bad things happening.

While this video covers the disadvantages of cloud computing, don’t despair because we’ll have the advantages of cloud computing video coming soon.

So let’s get started with

Disadvantage #1 – Vendor Lock-in


If you are managing your public cloud environment using the native console of that public cloud, there's quite a bit of vendor lock-in if you ask me. Take Azure, for instance: the steps required to move virtual instances out of Azure are many. You cannot just export your virtual instances, even to move them to a Hyper-V environment. While we are still talking about portability and vendor lock-in, take AWS: you can export your virtual instances, but there are many exceptions that will stop you. The list is long, so I'll post it on the screen; thanks to TechGenix for their write-up of the exceptions on that list.

At this point, some of you are probably throwing things at the screen, saying that no one manages these instances using the AWS or Azure console because you've pulled management of the environment into an enterprise orchestration app that gives you single-pane-of-glass management. While a single-pane-of-glass management app makes this way better, it's usually only the largest of enterprises using these tools; many of our mid-market clients are still using the actual console. Also, I didn't mean to pick on just AWS and Azure; those are just some relatable examples. But enough about portability and vendor lock-in for now.

Disadvantage #2 – Instance Sprawl

It's easy to scale up instances in most cloud environments, but there's usually little that pulls those instances back when your workload decreases. Some clouds just don't go back down; others can be brought back down, but automating this is tricky. We hear the word elastic and think of snapping back like a waistband, but it's more like a ratchet: it only moves outwards, locked in a single direction. If we remember the troubles of the world before cloud, we complained about underutilization and building to peak because we feared getting clobbered during heavy times of the year such as the holiday season.
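Since the snap-back usually isn't built in, you typically end up writing the scale-down policy yourself. Here's a minimal sketch of such a rule (hypothetical Python; real clouds express this through their own autoscaling policies): when average utilization falls well below target, shrink the fleet toward what the work actually needs.

```python
import math

def scale_down_target(current_instances, avg_cpu_percent,
                      target_cpu_percent=60, floor=2):
    """If the fleet is loafing, compute how many instances would run
    near the target utilization, never dropping below a safety floor."""
    if avg_cpu_percent >= target_cpu_percent:
        return current_instances  # busy enough; leave it alone
    needed = math.ceil(current_instances * avg_cpu_percent / target_cpu_percent)
    return max(floor, needed)

# 20 instances idling at 9% average CPU ratchet back down to 3.
print(scale_down_target(20, 9))  # 3
```

Without something like this wired into your automation, yesterday's holiday-season fleet just keeps running, and billing, in January.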

Going into


Disadvantage #3 – Legacy Scaling Issues

A traditional 3 tier architecture is usually more expensive to run in the cloud. This is a topic that needs its own video, so I won't go into details. But quickly: if you can't scale your database for performance (NOT failover), you are relegated to massive database instances that require terabytes of RAM to run. To get your database into public cloud with the required IOPS, you may need to order a physical server in the public cloud, so now you're basically back to private or hybrid cloud, with the costs that come along with that. Also, physical servers aren't elastic.

Moving onto

Disadvantage #4 – False Sense of Security

I think when we were initially exposed to the cloud, we expected a lot of native advantages that would just come with it, which gave us a false sense of security. I have three of these false senses of security, but there are probably more. Since I am numbering the disadvantages, I'll use letters for this subcategory. 4a: The cloud isn't inherently more or less secure, but sometimes we feel there's additional security layered in there, and there's not. 4b: There's no inherent disaster recovery in moving to the cloud. While virtualizing instances seemingly makes them more portable, virtualization isn't exclusive to the cloud, and again, there's still no DR built in. 4c: While perhaps enabling a better DevOps strategy because of marketplace integrations, the cloud doesn't come with its own DevOps built in unless you build it yourself. Let's move on to


Disadvantage #5 – Billing Formats

These bills from cloud providers appear to be set up for maximum transparency, letting you know of every second of compute you consume over a variety of products, but I find they aren't human-readable and have to be analyzed using software, after putting together a solid tagging strategy (if your cloud provider supports tagging).
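To give a feel for what "analyzed using software" means in practice, here's a minimal sketch (hypothetical Python with made-up column names; real exports such as AWS Cost and Usage Reports have far more columns) that rolls an hourly line-item export up by a cost-allocation tag:

```python
import csv
import io
from collections import defaultdict

# Hypothetical billing export: one row per resource per hour.
EXPORT = """\
resource,tag_team,usage_hours,cost_usd
i-0aa1,web,1,0.096
i-0aa1,web,1,0.096
i-0bb2,analytics,1,0.384
vol-9cc3,web,1,0.010
"""

# Sum the per-hour line items into a human-readable total per tag.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(EXPORT)):
    totals[row["tag_team"]] += float(row["cost_usd"])

for team, cost in sorted(totals.items()):
    print(f"{team}: ${cost:.3f}")
# analytics: $0.384
# web: $0.202
```

Four rows are easy; a real monthly export can run to millions of rows, which is why nobody reads these bills by hand, and why the rollup is only as good as the tagging discipline behind it.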

Disadvantage #6 – No Clear Best Practices

With public cloud, we are still in somewhat uncharted territory. Outside of modern coded applications, there's no super clear, applies-to-everyone best practice for public cloud just yet. In the enterprise, we got to a point where we had a very good vision of what best practice looked like between blade servers, good hypervisors, and super fast storage, but with the cloud there are so many ways to deploy.

In Conclusion:

This video isn't meant to disparage public cloud, but we needed to outline that public cloud is not a panacea, and that enterprises with applications that weren't born on the cloud need to take particular care before making this major move. Also, stay tuned, as we are working on a really good video about the advantages of cloud computing that should be out soon. Until next time, thank you for joining me once again.


The idea that sparked this blog post was me looking for a data center migration project plan that I could share with clients who wanted to brave that task themselves.

For the 1st topic of our Data Center VLOG series, I thought I'd talk about why organizations leave one data center for another. Internally, we use our own project planning software when we help people migrate, but we didn't have anything that was easy to share in a typical format like a Word or Excel file.


Click here to download the Data Center Migration Project Plan Sheet.

The Rest of this Post is the Transcript to our Video: Data Center Migration Project Plan

Since talking about a spreadsheet for 5 minutes would be painful to you and me, I instead outlined the reasons why folks leave data centers and then ultimately end up downloading a data center migration project plan.

Data Center Migration

Reason 1 – Chronic Outages

Folks move out of their server rooms so that they can leverage an economy of scale built by others, which should offer better reliability. But power, network, and HVAC outages will cause IT managers to think about alternatives. I'd also like to mention that while these systems are inherently redundant in most data centers, a lot of the time it's the auto-switching mechanisms that fail, negating all the redundancy put in place by the facility.


Reason 2 – Power Density Mismatch

Another reason folks leave data centers is what I call a power density mismatch. Here's an example: someone moves into the data center with traditional server hardware. We're talking about traditional 1-4U servers that may only need 4 kW of cooling. As that hardware nears the end of its life, a decision is made to increase computing density by way of blade servers.

Here's the problem: their cabinets are only able to cool between 4 and 8 kW, and blade servers, if you stack them up, can go way higher than that; 12 to 20 kW is not unusual these days. So here's what happens: you end up having to buy more cabinets and leave some half-filled so that the facility can keep up with cooling them.
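The mismatch is easy to quantify with a back-of-the-envelope calculation (hypothetical numbers, sketched in Python):

```python
import math

def cabinets_needed(total_kw, cooling_limit_kw_per_cabinet):
    """How many cabinets the load must be spread across so that no
    single cabinet exceeds what the facility can cool."""
    return math.ceil(total_kw / cooling_limit_kw_per_cabinet)

# Two loaded blade chassis drawing ~16 kW each would fit in one cabinet
# physically, but a 6 kW cooling limit forces them across six cabinets.
print(cabinets_needed(total_kw=32, cooling_limit_kw_per_cabinet=6))  # 6
```

You're paying for six mostly empty cabinets to house what physically fits in one, which is the mismatch in a nutshell.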

A slightly off-topic comment here: you can also get the opposite issue. Let's say you purchase space in a data center that has very high density out of the gate, like 20 kW+, and your equipment isn't anywhere near that. You'll probably end up paying a premium per cabinet and not getting your money's worth, since the build-out of that data center was more costly to provide the higher power density, which they have to pass along to you in the monthly fees.


Reason 3 – Bad Data Center Logistics

Logistics – Don't get me wrong, security is important, but chronic issues getting into a data center (especially after hours) will cause clients to leave. A data center that offers 24-hour access but requires a ticket or phone call outside of business hours is problematic; waiting outside a data center at 3 a.m. for someone to let you in is very frustrating.

Reason 4 – Mergers and Acquisitions

Mergers and acquisitions, and going out of business. The fourth reason we see is business model changes: corporate mergers and acquisitions cause consolidations, and unfortunately sometimes businesses go under, meaning they end up cancelling their colocation contracts.


Reason 5 – Vendor Data Center Consolidations

Data center consolidations – The fifth and final reason we're going to talk about today is a move initiated not by the client but by the data center itself. Let's say a data center company has 2 facilities, each with 30% occupancy. There's a good chance that at 30% occupancy neither facility is profitable, so they will close one facility and force those clients over to the other, bringing its occupancy up to 60%. That makes them profitable but upsets their clients, even if the move cost is fully borne by the data center.


So the point of this video was to offer you a free downloadable tool that helps you plan your data center migration. It's a full data center migration project plan in Excel format, and you can download it using the links on this page. If you are planning to make a move anytime soon, please refer to this video for key questions to ask before signing a contract.