Cloud cost management – the term is widely known, but few people understand the detail behind the process. Cloud services are a must-have for every business today, and cloud cost management is what uncovers hidden costs and secures the most cost-effective deal for your business cloud requirements. But to understand cloud cost optimization, you first need to understand what cloud infrastructure is and how it fits into the digital world.

What is cloud cost management?

Understand Cloud Cost Management First,
So you can spare your wallet later!

Cloud services are not wizardry – they offer a clear-cut view of the resources used and the costs incurred. But only IT managers keen enough to evaluate the facts can make full use of the tools available to them. The accuracy, speed, and agility that make cloud services an asset can quickly become an unbearable liability without smart planning and attention.

Multi-cloud deployments are in high demand today – which makes a clear, transparent understanding of cloud cost management strategies essential. In-depth knowledge of your cloud costs creates accountability and improves your utilization of each cloud resource.

But what kind of checkpoints are vital in the cloud cost management discipline?

Cloud Cost Management

Budget Setting: Establish spending limits to prevent cost overruns.

Spending Restrictions: Implement controls to manage expenses effectively.

Cloud Resource Utilization: Optimize resource use to stay within budget.

Cost Optimization: Identify and eliminate wasted resources, right-size instances, and leverage reserved capacity.

Usage-Based Pricing: Employ pricing methods based on actual resource usage.

Cost Visibility: Gain insight into expenditures, identify cost-driving resources, and classify spending.

Forecasting: Predict future costs using historical data and consumption trends.

Financial Planning: Use advanced cost management solutions for accurate predictions.

Cost Allocation: Allocate costs to stakeholders in multi-tenant environments to enhance accountability.

Resource Efficiency: Utilize cloud resources efficiently, including serverless computing and autoscaling.

Governance and Policies: Implement rules aligning resource provisioning with enterprise norms.

Collaboration: Foster communication and understanding between IT, finance, and other departments.

Continuous Improvement: Monitor, analyze, and adapt strategies to changing workloads and business demands.
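As a minimal illustration of how budget setting, forecasting, and cost visibility work together, here is a sketch that projects month-end spend linearly from spend so far. The thresholds and dollar figures are hypothetical:

```python
def check_budget(spend_to_date: float, monthly_budget: float,
                 day_of_month: int, days_in_month: int = 30) -> str:
    """Classify current cloud spend against a monthly budget.

    Projects month-end spend linearly from spend so far and
    returns a simple status string.
    """
    projected = spend_to_date / day_of_month * days_in_month
    if projected > monthly_budget:
        return f"ALERT: projected ${projected:,.0f} exceeds budget ${monthly_budget:,.0f}"
    if projected > 0.8 * monthly_budget:
        return "WARN: projected spend above 80% of budget"
    return "OK"

# Example: $6,000 spent by day 10 against a $15,000 monthly budget
print(check_budget(6000, 15000, day_of_month=10))
```

A real implementation would pull spend from your provider's billing export and use consumption trends rather than a straight line, but the shape of the check is the same.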

Cloud Cost Models – Something You Can’t Miss Out On!

Cloud pricing doesn’t have to be complicated. The costs behind your cloud services actually boil down to 3 main approaches – value-based, market-based, and cost-based pricing.

With value-based pricing, the provider charges you depending on how much value you receive from the service. The more essential the service is to your business, the higher the price.

Market-based pricing fluctuates based on current demand and availability, like an airline ticket. Prices go up when demand is high or capacity is limited.

Cost-based pricing is simply the provider charging you based on their expenses in maintaining the service. This pricing stays relatively fixed.

Cloud services themselves also have different pricing models – per usage/consumption, fixed monthly or yearly fees, or even auction-style spot instances.

By understanding what drives the different cloud pricing approaches and models, you can better budget for services and pick the offerings that make financial sense for your needs. The key is choosing the RIGHT model for the use case rather than defaulting to apparent bargains.
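To make the model choice concrete, here is a sketch of the break-even point between paying per usage and committing to reserved capacity. The rates are hypothetical:

```python
def breakeven_hours(on_demand_rate: float, reserved_annual_cost: float) -> float:
    """Hours of use per year at which a reserved commitment becomes
    cheaper than paying the on-demand hourly rate."""
    return reserved_annual_cost / on_demand_rate

# Hypothetical rates: $0.10/hour on demand vs. $500/year reserved
hours = breakeven_hours(0.10, 500.0)
print(f"Reserved pays off beyond {hours:,.0f} hours/year "
      f"({hours / 8760:.0%} utilization)")
```

The general rule holds regardless of provider: steady, predictable workloads favor commitments, while spiky or experimental workloads favor pure usage-based pricing.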

Your Personal Cloud Guide – Hire a Cloud Consultant

Transitioning business operations to the cloud is smart. But for many leaders, the tech complexities can get overwhelming fast.

That’s where we come in.

As experienced cloud computing consultants focused on humanizing tech, we become your business’ personal guide to the cloud. We take the time to understand your operations, goals, and pain points first.

Then, we create a tailored step-by-step cloud strategy using simple, jargon-free language. We’ll explain how the latest cloud infrastructure, platforms, data tools, and apps can solve growth struggles that keep you up at night – without adding IT headaches.

With deep expertise across all aspects of the cloud from leading providers, we’re constantly scouting the best solutions suited for companies just like yours. From streamlining workflows to reaching new customers, our personalized cloud roadmaps accelerate business success.

Whether you’re new or struggling to get more from current cloud investments, we’re here to answer questions, provide unbiased recommendations, and essentially be your cloud sherpa. Let’s climb higher together.

Demystifying Cloud Consultant Costs – What Impacts Your Investment?

Wondering what goes into the fees behind hiring a cloud consultant? When budgeting for expert guidance tailored to your business, three core factors shape your investment:

### Their Know-how 

Entry-level cloud expertise may seem more affordable. But advanced experience brings huge value by setting you up for long-term success. When evaluating consultants, prioritize industry-specific cloud insights over cost savings.  

### Scope of Impact  

Are you seeking quick cloud advice or an in-depth multi-phase roadmap for transformation? Clearly define the scope and span of what you want to achieve. Limited engagements are cheaper than extensive partnerships. But think long-term – a bigger upfront investment in the right cloud integrations saves over time.  

### Specific Services

Do you just need a cloud migration plan or full deployment and optimization? Cloud consultants offer varied services at different price points. Focus on the specific expert skills that best serve your goals instead of unnecessary services that inflate budgets.

The cloud allows growing companies to punch above their weight. And the right consultant helps land those blows. As your personal cloud guide, we can pinpoint exactly what your organization needs to gain every advantage – at a cost structure that works.

Wrapping up,

Managing cloud costs is a complex undertaking, but imperative in fully realizing the ROI of cloud services. As your organization continues leveraging the flexibility and opportunities of cloud computing, a lack of visibility and control over expenditures can quickly create bloated bills.

By implementing the cloud financial management best practices covered here, you can optimize usage, maximize savings, and align cloud spending to business values. Partnering with an experienced cloud consultant takes these efforts a step further through their expertise, oversight, and actionable recommendations tailored to your environment.

The healthcare industry is one of the most data-intensive industries in the world. Healthcare organizations must manage patient records, clinical trials, research data, and more. In addition, healthcare data is often sensitive and must be protected from unauthorized access.

Despite the many benefits of cloud-based healthcare solutions, 69% of respondents in a 2018 survey indicated that the hospital they worked at had no plan for moving existing data centers to the cloud.

However, there are several reasons why hospitals should consider moving to the cloud. Cloud-based solutions can offer increased security, as data is stored off-site and is, therefore, less vulnerable to theft or damage. Additionally, cloud-based solutions can be more cost-effective than on-premise solutions, as hospitals can avoid the upfront costs of purchasing and maintaining their own servers.

There are many reasons why healthcare organizations are turning to cloud-based solutions for storing and protecting data. Chief among them is compliance with the EMR Mandate. Cloud-based solutions also offer organizations a way to improve patient care: by storing patient records in the cloud, healthcare providers can share information more quickly and efficiently, leading to better overall patient care.

How Cloud Computing Can Benefit Healthcare Organizations:

Improved Security

One of the biggest concerns for healthcare organizations is data security. Healthcare data is often sensitive and must be protected from unauthorized access. Hackers are also increasingly targeting healthcare organizations in an attempt to steal patient data.

Cloud computing offers improved security for healthcare data. Cloud providers use a variety of security measures to protect data, including physical security, firewalls, and encryption. In addition, cloud providers have expertise in data security and can offer guidance on how to best protect healthcare data.

Lower Costs

Another advantage of cloud computing is lower costs. Healthcare organizations can save money by using the cloud instead of purchasing their own hardware and software. In addition, cloud providers often offer discounts for large volume customers.

Increased Flexibility

Cloud computing also offers increased flexibility for healthcare organizations. With the cloud, organizations can scale up or down quickly to meet changing needs. Organizations can also access data and applications from anywhere at any time. This is especially beneficial for remote workers or clinicians who need to access patient records while away from the office.

Enhanced Collaboration

Healthcare is a highly regulated industry, which can make it difficult for different organizations to share data and work together effectively. With cloud computing, however, healthcare organizations can share data and applications quickly and securely, which can help improve patient care.

Improved Access to Data and Applications

In the past, many healthcare organizations have struggled with managing and accessing their data due to its sheer volume. With cloud computing, this data can be stored off-site and accessed as needed, which can save a lot of time and hassle.

Increased Efficiency 

By storing data and applications in the cloud, healthcare organizations can free up valuable IT resources that can be better used elsewhere. Additionally, cloud computing can help to automate many tasks that are currently done manually, which can further improve efficiency.

Better Patient Care

Ultimately, the goal of any healthcare organization is to provide the best possible care for its patients. Cloud computing can help to achieve this goal by providing better access to data and applications, enhancing collaboration, and increasing efficiency. By making it easier for healthcare organizations to do their job, cloud computing can ultimately help to improve patient care.

Cloud computing has already had a big impact on a number of different industries, and healthcare is no exception. The healthcare sector is under increasing pressure to do more with less, and cloud computing can help healthcare organizations. Cloud providers can offer a number of advantages to healthcare organizations. They can provide the ability to scale up or down quickly and easily to handle changing needs, and they can offer pay-as-you-go pricing that can help save money on IT costs.

In addition, cloud providers can offer security and compliance services that help keep patient data safe. Finally, cloud providers offer access to a variety of applications and services that can streamline processes and improve patient care. As your application scales, it’s important to understand how different public cloud providers can work together to provide the best possible experience for your users. There are synergies and advantages that must be taken into account when choosing a provider, in order to ensure optimal performance and uptime. To understand this in detail, you can look to experts such as 515 Engine to guide your organization in the right direction.


This post was originally a vlog post from October of 2020. For reference, we post our vlogs here with the full transcript below along with clarifying comments and graphics. You can check out all our vlogs on our YouTube channel.

This week we tackle the question, “Is Virtualization Needed for Cloud?” which is pretty interesting and of course requires several long-winded corporate stories. If you don’t see the embedded video above, here’s a direct link.

Is Virtualization Needed for Cloud Computing?

Today we want to answer the question: is virtualization necessary for cloud computing? Hopefully, by the end of this short video, I’ll be able to outline what virtualization is and whether it’s necessary for cloud computing. Just a quick caveat: this video is going to exclude virtualization as it applies to microservices and networks.

I can’t tell you the answer to the question “Is virtualization needed for cloud computing?” until I recount a sordid tale from my corporate past. When I first got into the server business a long time ago, enterprises shied away from putting 3-tier architecture components on the same server. Your 3-tier architecture had web, app, and database layers. These were almost always three different machines, for reliability and better intercommunication. Only while apps were being developed or tested might you have seen them all on one machine.


As we’ve discussed in the past, in 3-tier architecture web servers only presented data, application servers ran the business logic, and database servers stored the data. These servers were also inherently redundant, so they were costly. Once you had them configured, they’d sit there idling. So you had three servers, at least $2–5k apiece, sitting there idling; if you measured their overall usage, it wouldn’t be unusual for them to run at 1–10% utilization. If your marketing department thought a million visitors were going to come to your website, you tended to do something called building to peak: scaling your environment to accommodate the worst-case scenario on the high side. So if you thought you were going to have a million visitors, you had to build for a million visitors, even though only 50,000 may have made it to the site. This was super costly, but there were early startups that got media attention (usually TechCrunch) and whose sites would literally crash, which was a bit embarrassing for a startup, especially one in tech.
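The build-to-peak waste described above can be put in rough numbers. The figures below are hypothetical, in the spirit of the $2–5k servers at 1–10% utilization:

```python
def overprovision_cost(server_cost: float, servers_built: int,
                       avg_utilization: float) -> float:
    """Capital spent on capacity that sits idle on average
    when you build to peak."""
    return server_cost * servers_built * (1 - avg_utilization)

# Hypothetical: three $4,000 tier servers, averaging 5% utilization
idle_spend = overprovision_cost(4000, 3, 0.05)
print(f"${idle_spend:,.0f} of capital is paying for idle capacity")
```

In other words, at 5% average utilization, nearly all of the capital outlay buys headroom you rarely use.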

So, to answer the question “is virtualization necessary for cloud computing,” we need to wait a bit longer. I’d like to do a quick recap of what virtualization is. Virtualization allows you to put multiple logical servers on a single physical server. You take this large server, install a hypervisor, and then divide the resources of that ONE server into small chunks that operate independently of each other logically. This way you get the specialization that enterprise computing demands without having to allocate a single server to each discrete function.

Why Normalization Is Good

Here’s another problem that virtualization solved, and it’s probably the most amazing benefit we get from virtualization. So strap in for another sordid corporate story!


When I worked in the managed hosting business back in the late ’90s, we would manage systems for clients. Let’s say you had a very specific HP server sitting in a cabinet and an identical cold spare sitting next to it. Sometimes the running server would fail, so you’d take the hard drives out of the failing machine, put them into the cold spare, and turn it on. In some cases, that computer wouldn’t turn on and start working, although in theory it really should have. A bunch of small nuances could cause the cold spare not to work. For instance, it could have been a difference in firmware between the servers, or some slight change in manufacturing versions, even though the model numbers were identical. It may have been a patch that was applied to one machine but never initialized because the server hadn’t been rebooted in months. It could have been something else entirely, so you were never guaranteed an easy recovery. In the end, this meant you had to image a new machine from scratch, carefully reinstall all the applications, copy the data off those hard drives onto the new server, test, and then run.

So at the most basic level, what does virtualization do for us? It allows you to have those single-purpose servers, still separated in every regard but sitting on the same piece of hardware. This lets you almost fully consume the resources of each server you put into production, with very little idling. But if you take anything away from this video, it’s the resolution to the story I just told: virtualization allows us to “normalize” CPU, RAM, and storage resources. Normalization is the idea that types and origins no longer matter. CPU, RAM, storage space, and network are made common. Where they come from, and what brand of hardware they run on, no longer matters. In other words, the origin of those resources isn’t taken into account – they just need to be there.

Virtualization allowed the modern cloud

Where Virtualization Helps

Here, finally, is why virtualization is needed for cloud computing. When we built systems for clients before, we’d allocate specific servers to them. Whether those servers ran at 1% utilization or were perpetually clobbered 24 hours a day didn’t matter. Those servers cost the service provider the full capital cost of the server, plus network, software licensing, monitoring, management, space, power, and cooling. Each server could also only be used by one client. Putting someone else on it was impossible; there was no easy way to logically separate this single resource among clients. (There was something called shared hosting, but it wasn’t something most enterprises would tolerate unless it was a simple brochure-type website.) It’s akin to renting an apartment to a family: if the family goes out to dinner, you can’t rent that apartment to another family for the three hours they’re out.

Now back to cloud computing, where you build a platform full of servers and sell access to the cumulative resources those servers offer. Say you buy a handful of servers that give you 10 CPUs, 10 GB of RAM, and 10 GB of storage. You can sell this at an hourly rate, and when capacity is handed back to you, it can be virtually “rented” to another entity. The buyer benefits because they can buy in bite-sized chunks, and the service provider benefits because they fully utilize their capital investment in servers. Of course, they’ll need to build in a little overhead for surges of business, but the more automation they employ, the more competitive their rates can be. From an uptime perspective, since these instances are virtual, fully self-contained, and lighter, you can start, stop, clone, and move them around in a way not possible before.
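To make the provider economics concrete, here is a toy calculation. All rates and utilization figures are invented for illustration:

```python
def provider_revenue(total_vcpu: int, hourly_rate: float,
                     utilization: float, hours: int = 8760) -> float:
    """Annual revenue from selling capacity by the hour at a
    given average utilization of the platform."""
    return total_vcpu * utilization * hourly_rate * hours

# Hypothetical platform: 100 vCPUs sold at $0.05 per vCPU-hour
dedicated = provider_revenue(100, 0.05, utilization=0.10)  # one tenant, mostly idle
shared = provider_revenue(100, 0.05, utilization=0.70)     # many tenants, virtualized
print(f"dedicated-style: ${dedicated:,.0f}/yr vs shared: ${shared:,.0f}/yr")
```

Same hardware, same rate: the difference is entirely how much of the capacity can actually be sold, which is exactly what virtualization unlocks.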

Buy Your Compute By the Drink


So cloud computing is really predicated on the premise that you have virtualization. Virtualization doesn’t necessarily make the raw compute cheaper, but it does two important things: 1) buyers can buy in small chunks with little to no commitment, and 2) sellers can sell their entire capacity and scale their environment without having to match their hardware, thanks to the normalization we discussed earlier. If you need 10 racks of hardware, buying that in the cloud will probably be more expensive, but the fact that you could have slowly scaled up to 10 racks of gear one CPU at a time is the real benefit here.

One last little story, involving virtualization and VMware. The first time I saw VMware run was on a friend’s laptop. He was an engineer for a software company, and his role was to take some of the applications his prospective client was using and create a proof of concept of how his software would help integrate them. He’d literally mock up their environment on his laptop, load test data, and demonstrate to the client what the integration would look like using their actual applications and test data.

Playing Rugby

Installing a Windows Server OS, SQL Server, and several enterprise applications on a laptop is not the best idea if you still need to check your email, fill out expense reports, collaborate with your colleagues, and not have IT run down the hall and tackle you. So he would build these proofs of concept in a virtual instance that was fully isolated on his laptop. Once the client saw the proof of concept, he could delete that VM instance, which was literally a single file, or copy it off to another device. Meanwhile, his pristine laptop, imaged by IT the day he was hired, stayed in perfect shape, since all that craziness lived in its own self-contained instance in a single virtual hard drive file.

I hope this video helped explain how virtualization helped the cloud become the cloud, and hopefully I answered the question “Is virtualization needed for cloud?” for you. The overall idea is that you are abstracting operating system instances from hardware. With microservices, which we covered last time, you are abstracting code from the operating system, but we’ll cover that in a future video when we go into the world of DevOps. Thank you again, and I look forward to seeing you when we release our next video!

We have an evolving library of Cloud Computing resources that you can use for any research you have.


“Why is Cloud So Expensive?” is a question that no one wants to ask or answer after completing the challenging task of moving workloads out of legacy platforms.

In this post, we look at a case study of a Software as a Service (SaaS) provider that had predictable costs in private cloud and found themselves spending more in public cloud. We’ll attempt to illustrate why.

  • About 87% of our clients have applications based on 3-tier rather than born-on-cloud architecture.
  • Our client had a strong development team but a small operations team.
  • Public clouds can easily cost as much as, if not more than, managed private cloud.

Cost Summary Over Time

In the presentation below, you can see that my client had these costs:

  • 2015 – Private Cloud @ $17,000 (fully managed by the private cloud provider)
  • 2017 – Public Cloud + Consultants @ $24,000 (Public IaaS + Consultants to help manage gaps in scope like a DBA)
  • 2019 – Public Cloud + Certified Managed Services Provider @ $20,000 (Public IaaS + MSP to fully manage all)

We also noticed that the ad-hoc consulting costs didn’t offer the consistency or ownership exhibited by a specialized Managed Services Provider (MSP).


A Quick Note on Economies of Scale

  • A NOC (Network Operations Center) is a costly operation; it’s better to share this resource.
  • Monitoring, patching, trouble ticketing, event correlation, and inventory management systems are best shared amongst many organizations.
  • Engineers are difficult to recruit and retain unless you are an IT-centric organization.

Economies of scale are usually talked about when it comes to hardware but they can also apply to expertise and staffing. A fully functioning NOC with all levels of support is costly to build and maintain.

Cost Overview from 2015 – 2019

The chart below takes you through what my client spent in 2015, 2017 and finally in 2019.


So with this case study, to answer the question “Why is cloud computing so expensive?”, we can say:

  • Economies of scale apply to both hardware and personnel.
  • Public cloud can be quite amazing for born-on-cloud code, but not great for 3-tier, vertically scaling architecture.
  • Public clouds are better priced for horizontally scaling environments.
  • It’s easy to turn on server instances and underutilize them.
  • Bringing in consultants with a single focus can be costly.
  • Consultants are great at solving point-in-time issues, but may not be the best at overall site ownership.


When moving to public cloud, it’s important to take into account that if your systems were fully managed before, you’ll need to be fully managed after the move as well.



We repost our vlog here along with the full transcript. If the embed above isn’t working, here is the direct link to the video.

If you’d like a bit of background on what cloud computing is, check out our What is Cloud Computing page.

Cloud Deployment Models

Search for the phrase “cloud deployment models” and you’ll get articles that reference public, private, and hybrid clouds. In some instances, they’ll use the word multi-cloud as well. While that’s accurate, I really wanted more depth and nuance on the topic. The public, private, and hybrid cloud terms aren’t prevalent in many of the discussions I have with my clients. This is a pretty vast topic, so I hope to do it justice. Please note, this is by no means authoritative, because there are many ways to view and categorize the various ways to deploy your systems into the cloud.

A Tangent About Horizontal vs Vertical Scaling

Before we get into the deployment models used in cloud, I’d like to start with a brief primer on horizontal and vertical scaling, because I’ll use these terms a lot in this video.

Simply put, vertical scaling: take your web, app, and DB layers of the environment, picture them stacked on top of each other, and remember there are limitations when sharing the workload at the application and database layers. Let me pick on Microsoft SQL Server for a moment. Clustering a Microsoft SQL database was really about failing over when one of the two servers went down; it wasn’t designed to share the load between two machines. Said differently, originally Microsoft SQL clusters were for uptime, not for processing more work. Because you were relegated to using a single database server at a time, your only option for scaling was to add CPU and RAM to that same single server. The server got bigger, so we represent that as vertical. You could not easily scale to a second machine, which is what horizontal scaling refers to. Microsoft has long since addressed this issue, and other database vendors struggled with it as well, so I only use it as an example. Application servers suffer from this same issue: if the logic isn’t built into the application enabling it to look for other instances to process the workload, you end up having to build massive single instances of servers.

Let’s move on to horizontal scaling. Imagine that you can take those original three tiers and have an array of these devices going left and right, as wide as you’d like. Each server can literally be garbage, because if one of them fails, it can be kicked out of the rotation and the load split amongst the remaining working servers. We’ve done this fantastically at the presentation, or web, layer forever. When you are able to do this at all three layers, you start moving in the direction of being elastic.
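The horizontal idea above can be sketched in a few lines as round-robin distribution over a pool of interchangeable servers. The server names and request counts are made up for illustration:

```python
from itertools import cycle

def distribute(requests, servers):
    """Round-robin requests across a pool of interchangeable servers.
    A failed server would simply be dropped from the pool and the load
    split among the survivors -- the essence of horizontal scaling."""
    assignments = {s: [] for s in servers}
    pool = cycle(servers)
    for req in requests:
        assignments[next(pool)].append(req)
    return assignments

# Six requests over three web servers; if one dies, rerun with the survivors
result = distribute(range(6), ["web-1", "web-2", "web-3"])
print({server: len(reqs) for server, reqs in result.items()})
```

Real load balancers add health checks and smarter policies (least-connections, weighted), but the principle of spreading work over disposable peers is the same.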
Let’s move on to what this video was really supposed to be about: the different methods of cloud deployment we see. I’d like to outline all the options available to us. Some housekeeping first: I will lump some things in here that aren’t considered cloud at all, but they are still relevant to the discussion, so I’ll keep them in. Also, I am going from oldest to newest technology. Lastly, I am not going to dive deep; this is a super technical topic that would be hard to cover in the 10 minutes we get together each week. So, don’t laugh, but I am going to start with:



Mainframe

While you’d think mainframes are the opposite of cloud, in a manner of speaking it could be argued that the mainframe is the original cloud, just without the internet to make it universally accessible. Let me explain. You had massive horsepower in a single concentrated machine that could be partitioned to run different functions. Folks from around the company would connect to it using fairly low-powered terminals, and all the data resided in a single place. Take a mainframe, give access to it over the internet, and it’s cloud. Just a really old, hard-to-maintain, and expensive cloud. The second deployment method is:

Client Server

Client/server is not really cloud but has been adapted to the cloud. The easiest way to describe client/server is to think of any desktop application you use: it’s an executable file that runs on a computer, and its data is typically stored on that same computer. There’s no web, app, and database layer to it, but it’s still a staple for many businesses. In the healthcare field, many applications are still client/server, with software publishers trying to offer hosted versions of those applications. To “enable” it for the cloud, you basically have everyone log in to a terminal server which “presents” the application screen to you, almost like streaming a video of the screen. If you’ve ever used LogMeIn, or gotten remote IT support where your keyboard and mouse are taken over by a technician, it’s very similar. So terminal services are how these are deployed in the cloud, but scaling the application itself is a bit tricky: you run the application on one machine and the database on another, and you scale by adding more RAM and more processors. I imagine most of those software providers are working on re-coding their applications to run in the cloud using a more modern method. #3:


3 Tier Web Application

This was the happening thing when I started in IT in the ’90s. There are three tiers: web, application, and database. Users could use any web browser to interact with the web layer, which presented the data; the application layer processed the logic and was really the brains of the application; and the database stored all the data. Many of these applications scaled vertically, meaning you’d just get bigger servers, with the exception of the web or presentation layer, which allowed you to scale using load balancing. In time, applications and databases began to be load balanced more easily, which allowed them to scale horizontally. This let the processing leave that original app or database server for one sitting next to it. If there were a version 2 of 3-tier application architecture, it would have been virtualization. Method #4 is:



Microservices

Microservices-based architecture is new and just so different from what we are used to that it may take the longest to explain. If you are familiar with the way an API works, you can quickly grasp what microservices-based architecture is. Rather than writing a lot of code into a monolithic, single program with all the components in one place, you build containers for your application. Each container holds all the libraries needed to execute a specific aspect of the software in a very single-tasked sort of way. Let me give a simple example: you build an application for your online store. You have a shipping microservice in your application, which receives an address and a weight from the invoicing microservice, computes the shipping cost and estimated delivery times, and hands them back to the invoicing microservice. Let’s say it’s holiday time and the invoicing microservice is getting clobbered. The orchestration and scheduling system would fire up more invoicing microservices and let the system know: hey, there are two of me now, and you are free to request invoice creation from me. As this increases, the load on the shipping microservice will increase too. Scheduling and orchestration will say: hey, we need to spin up another instance of this shipping microservice until I notify you otherwise. If the app is truly elastic, it will note the decrease in requests after the surge and turn off those extra instances, meaning you hand resources back to the infrastructure. If this is a public cloud, you are no longer being charged for that computing resource. The takeaway is that you’ve broken your application into small sections, each responsible for processing very specific tasks and always available to provide the computation requested of it on demand.
This model is very easy to scale horizontally, again meaning you can scale out left and right of an instance with lower-powered containers and then share the compute load over those new instances. You don't need to add massive CPU and RAM to single instances, which is what we did when we scaled vertically.
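To make the shipping example concrete, here's a minimal Python sketch of what that single-purpose microservice might compute. The rate table, function name, and zones are all invented for illustration; a real service would sit behind an HTTP endpoint that the orchestrator routes requests to.

```python
# Hypothetical sketch of a single-purpose "shipping" microservice.
# Rates and zone names are illustrative, not from any real system.
RATES = {"domestic": (5.00, 1.50, 3), "international": (15.00, 4.00, 10)}

def compute_shipping(weight_kg: float, zone: str) -> dict:
    """Given a weight and destination zone (as the invoicing
    microservice would send), return cost and a delivery estimate."""
    base, per_kg, days = RATES[zone]
    return {"cost": round(base + per_kg * weight_kg, 2),
            "est_delivery_days": days}

# Because the service is stateless, an orchestrator can run any number
# of identical copies and route each request to whichever is least loaded.
print(compute_shipping(2.0, "domestic"))  # {'cost': 8.0, 'est_delivery_days': 3}
```

The key design point is that the function holds no state between calls, which is what makes spinning up a second or tenth copy trivial.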


Serverless Computing

is our 5th deployment method. Think of serverless this way: it's a system that just runs code. It says, give me your code, I'll execute just your code, and I'll output the result to where you want, as many times as you ask me to. In the serverless world, major players would be the likes of AWS Lambda or Google's Firebase. You can set up your code to automatically trigger from other cloud services or call it directly from any web or mobile app. Serverless really shines when you have a process that kicks off needing horsepower, but in small intervals, with the results handed off to some other infrastructure. I think serverless is probably the trickiest of all these models to explain, so I'd like to give a real-world example of how something like this could be used. Let's say you run a site that takes a rectangular profile picture and spits it back out to end users with rounded edges, which incidentally is a service I've used because my Photoshop skills are garbage. You would have a simple website, and folks would upload their rectangular picture to the server. To convert it, you would need some code to run a bunch of computations and transform the image. It may not be worth having an entire environment built out to execute code that may sit idle for long periods of time. So this very simple website takes the file and pushes it to a serverless instance, which crunches it, rounds the corners, and places the resulting file in some online storage with a download link presented to the client. The total computation and processing of that function could be almost free, especially when sent to some massive infrastructure that processes code all day. You are not occupying any compute online 24 hours a day, just tapping a service that can execute your code like a utility.
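Here's a hedged sketch of that corner-rounding flow in the shape of an AWS Lambda-style Python handler. The `handler(event, context)` signature is Lambda's convention, but the event fields, bucket flow, and the stubbed-out image processing are all made up for illustration:

```python
# Hypothetical serverless handler for the corner-rounding example.
# Real image work (e.g. with a library like Pillow) is stubbed out.

def round_corners(image_bytes: bytes) -> bytes:
    # Placeholder for the actual pixel crunching.
    return b"ROUNDED:" + image_bytes

def handler(event, context=None):
    """Invoked once per upload; runs, bills for milliseconds of
    compute, and exits. No server sits idle between requests."""
    image = event["image_bytes"]      # handed off by the simple website
    result = round_corners(image)
    # In a real deployment the result would land in object storage
    # and the site would present the client a download link.
    return {"status": "done", "output_size": len(result)}

print(handler({"image_bytes": b"rectangular-profile-pic"}))
```

The point of the shape is that the website never hosts the heavy computation itself; it only invokes this function on demand.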
So let me wrap this up and pray that I haven't made this more complicated than it needs to be. When looking at these deployment models, one may sound better than another, but they are really a continuum of how our application-delivery technology is developing. The goal of this video is not to promote or push an angle, since changing deployment models without changing the code is not something you can aspire to. You are relegated to the cloud deployment model you have, based on how the code is written. You can choose to code all new applications using something modern that runs on these newer cloud methods, but you can't simply take your existing application and deliver it through a cloud technology that can't serve it to your clients at a reasonable price. I thank you for joining me this week, and I hope this shed some light on the various ways that applications are delivered.

Vlog 2 – Disadvantages of Cloud Computing

Introduction – Disadvantages of Cloud Computing

Estimated reading time: 6 minutes

This week we dig deep into the disadvantages of cloud computing. For a decent overview of cloud computing, check out our perpetually evolving What is Cloud Computing Resource page.

Disadvantages of Cloud Computing, Episode 2

The remainder of this post is the transcript to the video above

I think the biggest issue in our videos is the jump from the upbeat music in our intro to someone like me, at a much lower energy level, talking about cloud. It reminds me of that famous Casey Kasem clip where he was upset at his producers for doing the same thing to him. (visual+audio clip)

Before we get into the disadvantages of cloud, let's ask an important question: what did the cloud do for us? To answer: the cloud gave us nothing more than the ability to purchase compute in smaller increments than was possible before. Still speaking at a super high level here, the cloud is like connecting your home to municipal water. Pay-by-the-gallon works great for a household, but you may get clobbered if you try to water an entire farm using municipal water. If you own a farm, you may need to dig your own well and maintain it yourself. So if you currently use 50 racks of hardware to run your IT, you don't really need to buy in small increments, unless that environment can scale down to almost nothing during off-peak hours. Imagine taking your legacy database that requires 1 TB of RAM and running it at 1 GB of RAM. That would probably lead to bad things happening.

While this video covers the disadvantages of cloud computing, don’t despair because we’ll have the advantages of cloud computing video coming soon.

So let’s get started with

Disadvantage #1 – Vendor Lock-in


If you are managing your public cloud environment using the native console of that public cloud, there's quite a bit of vendor lock-in, if you ask me. Take for instance Azure: the steps required to move virtual instances out of Azure are many. You cannot just export your virtual instances, even if you are moving them to a Hyper-V environment. While we are still talking about portability and vendor lock-in, take for instance AWS: you can export your virtual instances, but there are many exceptions that will stop you. The list is long, so I'll post it to the screen; thanks to TechGenix for their write-up of the exceptions on that list.

At this time, there are probably some of you throwing things at the screen, saying that no one manages these instances using the AWS or Azure console because you've pulled management of the environment into an enterprise orchestration app that gives you single-pane-of-glass management. While a single-pane-of-glass management app makes this way better, it's usually the largest of enterprises using these tools; many of our mid-market clients are still using the actual console. Also, I didn't mean to pick on just AWS and Azure, those were just relatable examples. But enough about portability and vendor lock-in for now, as we move on to disadvantage #2: instance sprawl. It's easy to scale up instances in most cloud environments, but there's usually little that pulls those instances back when your workload decreases. Some clouds just don't scale back down; others can be brought back down, but automating this is tricky. We hear the word elastic and think of snapping back like a waistband, but it's more like a ratchet: it only ratchets outward, locked in a single direction. If we remember the troubles of the world before cloud, we complained about underutilization and building to peak because we feared getting clobbered during heavy times of the year, such as the holiday season.

Going into


Disadvantage #3

Legacy scaling issues. A traditional 3-tier architecture is usually more expensive to run in the cloud. This is a topic that needs its own video, so I won't go into details. But quickly: if you can't scale your database for performance (not failover), you are relegated to massive database instances that require terabytes of RAM to run. To get your database into public cloud with the required IOPS, you may need to order a physical server in the public cloud, so now you're basically back to private or hybrid cloud, with the costs that come along with that. Also, physical servers aren't elastic.

Moving onto

Disadvantage #4

I think when we were initially exposed to the cloud, we expected a lot of native advantages that would just come with it, which gave us a false sense of security. I have three of these false senses of security, but there are probably more. Since I am numbering these disadvantages, I'll use letters for this subcategory. 4a: The cloud isn't inherently more or less secure, but sometimes we feel there's additional security layered in there, and there's not. 4b: There's no inherent disaster recovery in moving to the cloud. While virtualizing instances seemingly makes them more portable, virtualization isn't exclusive to the cloud, and again, there's still no DR built in. 4c: While perhaps enabling a better DevOps strategy because of marketplace integrations, the cloud doesn't come with its own DevOps built in unless you build it yourself. Let's move on to


Disadvantage #5

which deals with billing formats. The bills from cloud providers appear to be set up for maximum transparency, letting you know of every second of compute you consume across a variety of products, but I find they aren't human-readable and have to be analyzed using software, after putting together a solid tagging strategy, if your cloud provider supports tagging.
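As a sketch of what that analysis software does, here's a minimal Python example that rolls a made-up usage-line export into per-tag totals. The column names and the tag key are assumptions; real provider exports (AWS Cost & Usage Reports, for instance) have far more columns.

```python
# Hypothetical per-tag cost rollup over a raw usage-line bill.
import csv
import io
from collections import defaultdict

SAMPLE_BILL = """service,tag_team,cost_usd
EC2,web,41.27
EC2,data,103.55
S3,web,7.10
Lambda,data,0.43
"""

def cost_by_tag(raw: str) -> dict:
    """Sum line-item costs per tag value, assuming a tagging
    strategy was applied consistently before export."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(raw)):
        totals[row["tag_team"]] += float(row["cost_usd"])
    return {tag: round(total, 2) for tag, total in totals.items()}

print(cost_by_tag(SAMPLE_BILL))  # {'web': 48.37, 'data': 103.98}
```

Untagged lines are the usual weak spot; anything without a tag simply can't be attributed, which is why the tagging strategy has to come first.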

Disadvantage #6


With public cloud, we are still in somewhat uncharted territory. Outside of modern coded applications, there's no super-clear, applies-to-everyone best practice for public cloud just yet. In the enterprise, we got to a point where we had a very good vision of what best practice looked like between blade servers, good hypervisors, and super-fast storage, but with the cloud there are so many ways to deploy.

In Conclusion:

This video isn't meant to disparage public cloud, but we needed to outline that public cloud is not a panacea, and that enterprises with applications not born on cloud need to take particular care before making this major move. Also, stay tuned, as we are working on a really good video about the advantages of cloud computing that should be out soon. Until next time, thank you for joining me once again.


The idea that sparked this blog post was me looking for a data center migration project plan that I could share with clients that wanted to brave that task themselves.

For the 1st topic of our Data Center VLOG series, I thought I'd talk about why organizations leave one data center for another. Internally, we use our own project planning software when we help people migrate, but we didn't have anything that was easy to share in a typical format like a Word or Excel file.

screenshot of excel datacenter migration project plan spreadsheet

Click here to download the Data Center Migration Project Plan Sheet.

The Rest of this Post is the Transcript to our Video: Data Center Migration Project Plan

Since talking about a spreadsheet for 5 minutes would be painful to you and me, I instead outlined the reasons why folks leave data centers and then ultimately end up downloading a data center migration project plan.

Data Center Migration

Reason 1 – Chronic Outages

Folks move out of their server rooms so that they can leverage an economy of scale built by others, which should offer better reliability. But power, network, and HVAC outages will cause IT managers to think about alternatives. I'd also like to mention that while these systems are inherently redundant in most data centers, a lot of the time it's the auto-switching mechanisms that fail, negating all the redundancies put in place by the facility.


Reason 2 – Power Density Mismatch

Another reason why folks leave data centers is what I call a power density mismatch. Here's an example: someone moves into the data center with traditional server hardware. We're talking about traditional 1-4U servers that may only need 4 kW of cooling. As that hardware gets near the end of its life, a decision is made to increase computing density by way of blade servers.

Here's the problem: their cabinets are only able to cool between 4 and 8 kW, and blade servers, if you stack them up, can go way higher than that. 12 to 20 kilowatts is not unusual these days, so here's what happens: you end up having to buy more cabinets and leave some half-filled so that the facility can keep up with cooling those cabinets.

Slightly off-topic comment here: you can also get the opposite issue. Let's say you purchase space in a data center that has very high density out of the gate, like 20 kW+, and your equipment isn't anywhere near that. You'll probably end up paying a premium per cabinet and not getting your money's worth, since the build-out of that data center was more costly to provide the higher power density, which they have to pass along to you in the monthly fees.
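The cabinet math behind the mismatch is simple division. Here's a quick sketch using illustrative numbers: a 16 kW blade load in a facility that can only cool 6 kW per cabinet.

```python
# Back-of-the-envelope cabinet count for a given load. The kW figures
# are illustrative, in the ranges mentioned above.
import math

def cabinets_needed(total_load_kw: float, cooling_per_cabinet_kw: float) -> int:
    """Round up: a partially used cabinet is still a whole cabinet."""
    return math.ceil(total_load_kw / cooling_per_cabinet_kw)

load_kw = 4 * 4.0  # four blade chassis at 4 kW each
print(cabinets_needed(load_kw, 6.0))  # 3 half-filled cabinets instead of 1
```

The same equipment in a single high-density cabinet would fit in one footprint, which is exactly the premium the higher-density facility charges for.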


Reason 3 – Bad Data Center Logistics

Logistics – Don't get me wrong, security is important, but chronic issues getting into a data center (especially after hours) will cause clients to leave. A data center that offers 24-hour access but requires a ticket or phone call outside of business hours is problematic, as waiting outside a data center at 3am for someone to let you in is very frustrating.

Reason 4 – Mergers and Acquisitions

Mergers, acquisitions, and going out of business. Reason number four we see is business model changes: corporate mergers and acquisitions cause consolidations, and unfortunately sometimes businesses go under, meaning they end up cancelling their colocation contracts.


Reason 5 – Vendor Data Center Consolidations

Datacenter consolidations – The fifth and final reason that we're going to talk about today is a move initiated not by the client but by the data center itself. Let's say a data center company has 2 facilities, each with 30% occupancy. There's a good chance that at 30% occupancy neither facility is profitable, so they will close one facility and force those clients over to the other, bringing its occupancy up to 60%, which makes them profitable but upsets their clients, even if the move cost is fully borne by the data center.


So the point of this video was to offer you a free downloadable tool that helps you plan your data center migration. It's a full data center migration project plan in Excel format, and you can download it using the links on this page. If you are planning to make a move anytime soon, please refer to this video for key questions to ask before signing a contract.


It’s not that I don’t want people to migrate to AWS. For several years, I’ve witnessed many IT professionals move to AWS before I had a chance to assess their application. So far, I’ve seen mixed results as far as performance and cost. Many of these people have been kind enough to share their experience with me, which in turn I’ll share with you today.

Some AWS History

I am sure many web hosting firms are still wondering where they went wrong. In a brief look back, it appears that AWS came out of nowhere and passed industry giants that were in the business during the dot-com boom. I think the success of AWS was a result of these things: 

  1. You can do anything in the AWS console with almost no limitation. Things that before took a massive effort and support tickets are done instantly.
  2. The pricing was never a mystery; you could look at the pricing calculator and know what you were going to spend beforehand. (Auto-scaling issues aside).
  3. You could enter your credit card and instantly have access to services.
  4. AWS started off by giving 1-year free trial accounts. Once built, who wants to move their application?
  5. If you got upset or received poor service, you weren’t bound by a contract.
  6. AWS had brand recognition thanks to Amazon being an online retail force known for solid customer service.

Examples of Moves to AWS that Went Well

I have clients that made the move to AWS that were successful. Here are some examples of clients that made the move to AWS and were better off for it. I’ll do my best to outline the circumstances that made it a good fit for them.

AWS Success story #1 – Moved from Heroku

For those on Heroku, there’s a high probability that you will fit well in AWS due to the modernity of your application and architecture. There’s a good chance that the application is container-based and horizontally scaling which makes a good fit in AWS with their Elastic Beanstalk product. 

AWS Success story #2 – Had orchestration / software-defined datacenter in place

I have clients that use orchestration in their software-defined data centers. The orchestration they implemented was made possible by different software vendors. Here are some that we’ve encountered:

Saltstack – Allows Intelligent Orchestration of Software Defined Datacenters.
Red Hat CloudForms – CloudForms also allows intelligent orchestration of public and private virtual instances and containers. Not only can this be used with AWS, it's also able to orchestrate between on-premise and Azure clouds. You may be familiar with CloudForms if you used its open-source counterpart, ManageIQ.  (Disclosure: we are a Red Hat partner)

Because there was orchestration in place, VM sprawl is kept in check along with a host of other policies which keep the environment secure and properly utilized. 

AWS Success Story #3 – Application built on AWS initially (DevOps)

I have a client that runs a SaaS that serves HR recruiters. My client needed a good place to build the application where the developers could all collaborate. Since this was built on AWS, it naturally fit the AWS ecosystem when they went live. 

AWS Not So Successful Stories

We’ve also worked with those that haven’t had great experiences with AWS.  Here are some examples of those cases:

AWS Not So Successful Story # 1 – Vertically Scaling App with Large Databases

There are many legacy applications out there that don’t do great when virtualized. Although AWS can give you really large server instances, they are still virtualized. This is particularly an issue for larger databases that cannot be clustered for performance or sharded. My client runs a SaaS where there’s a database for each client. This means that the database server has a ton of work and the IOPS are just too low in virtualized instances. Our client is best served by a cloud provider that can mix virtual and physical servers. While this may seem old or dated, there are many firms that have an application that feels like it’s multi-tenant but behind the scenes it’s old fashioned. Old fashioned doesn’t mean unreliable. 

AWS Not So Successful Story #2 – Legacy Application on Sparc Solaris

Ok, so this application never made it to AWS but in an evaluation, AWS was on the list of places to check out. To AWS’ credit, there are 23 pages of supported platforms they can handle but Solaris isn’t one of them.  The client had an ERP that excelled in multi-currency finances and so a new ERP system was not an option.  The ultimate home for this application was a traditional hosting provider that had Sun Solaris capabilities. 

AWS Not So Successful Story #3 – Cost Overruns

This story isn’t specific to a particular client of ours but it’s the on-going complaint we hear about cost overruns on AWS. Vertically scaling applications, as well as auto-scaling configurations that are turned on without billing alerts, have led to some very high computing bills. 

Actually, this seems to be the biggest concern when using AWS. I don't fault AWS for this; the nature of technology is that there will be strange things that happen. An application that spins out of control can easily trip auto-scaling features and give a client a very high bill. Developers may spin up some test server instances and then forget about them for months before catching them on the bill.

Here's an excerpt with Corey Quinn (AWS expert) on the Cloudcast podcast (The Cloudcast #305 – Last Week in AWS, with Aaron Delp and Brian Gracely) speaking to the issue of AWS bills in general: [Excerpt]

It's a bit unfair for me to take Corey's interview out of context, so check out the entire podcast on your own.


I’ve tried to outline some of the successes and failures we’ve seen on AWS. These examples can be made for any public cloud including Azure and Google Compute. I think the best advice to heed is something I read on Reddit from user “sithadmin” when he says the following, “If you’re moving to AWS without actually having workloads suited for it, prepare to start burning wads of cash without noticeable improvements in performance or reliability. Doubly so if you’re not right-sizing your VM.”


Tech media sometimes misses the mark when it comes to cloud computing. Many articles describe companies' positive IT transformations due to cloud migrations. These stories have encouraged others to migrate to the cloud when most of them should not be moving at all. Due to scaling incompatibilities, these firms have seen higher costs and lower performance. While there are successes, news organizations should balance the discussion with articles where cloud doesn't work out. I hope this article does a good job of explaining why many firms cannot make the move to cloud at this point. I consult on cloud and colocation projects, so I don't gain by speaking ill of the cloud. It's my hope that this article will be an impartial account of real-world projects I've worked on.

With complicated issues in tech news, writers want to simplify matters of cloud for easy consumption. These articles cater to what’s popular and don’t outline who is able to gain from the cloud. The resulting articles aren’t very helpful. Here are some article titles I’ve seen:

  • Public vs Private Cloud
  • Top 7 tips on how to migrate into the cloud
  • Should you use AWS, Google or Azure for your Cloud services?
  • Top Storage Providers You Should Consider
  • Company X migrates to the cloud, reduces CapEx.

For many legacy applications, the “public vs private cloud” discussion is premature. If your application doesn’t fit, there’s no discussion of public vs private cloud to be had. It should be a discussion of how these systems scale and will they fit into typical cloud models at all. We’ve seen a 2-3X increase in costs when legacy systems are moved into the cloud without re-coding.

Born-on-cloud applications thrive in the cloud because there is no need to adapt the code at all: they are container-based, scale horizontally, and can give back unused computing cycles. Startups have applications that fit the cloud because they use modern coding. This article doesn't refer to those born-on-cloud applications; it addresses the legacy applications that still run our larger enterprises.

Who Are You Talking About in This Article Then?

Here are the firms that this article refers to:

  • Enterprises with ERP, Messaging, CRM and 3-tier web applications.
  • Software as a Service firms. Firms who provide their application as a subscription to other firms.

Neither of these examples is able to realize the economies of scale of the cloud. I hope this article guides those wondering, "is there something out there better for me?"

So Why Are You Down on Cloud?

The cloud is a great way to deploy complicated systems at a fraction of their previous cost if your app fits. But I have to share the complaints I receive when speaking to folks researching the cloud. Here are the recurring subjects that come up:

  • (After getting quotes) Why is this going to cost more in the Cloud?
  • That virtual is almost more expensive than a physical machine, why is that?

Once migrated (without guidance), I hear these complaints:

  • We are spending more in the cloud than we did with our old environment
  • Our systems run slower in the cloud than they did with old physical servers
  • We started out cheaper but within a few months, it was more expensive when we boosted resources due to poor performance.
  • The tools we have access to are excellent but it comes at a high cost.

So how is it these firms ended up in this situation?

Let’s Blame The Media and Marketers

I don’t blame cloud providers for using words like auto-scaling, elastic, and dynamic. For apps using modern architectures, this is a possibility. Most enterprises do not scale in a manner that fits the cloud well. Here’s what these marketers aren’t talking about when it comes to your legacy application:

  • Auto-scaling works but usually only to increase resources (not decrease).
  • A Microsoft SQL 2016 Clustered database has large needs. You cannot “turn down” RAM from 128GB to 2GB because fewer people are using the application.
  • A heavy cloud virtual instance can cost more than a physical server with similar specs.

I expect these service providers to be upfront about these limitations; if not the service providers, then the journalists who cover the cloud should be. I have read a couple of stories in the mainstream press documenting clients who didn't realize the promised panacea of cloud (lower costs and better performance) and reverted back to physical systems in colocation space, which offered them better visibility and control of their costs.

Let’s Blame Online Forums and Tech Communities

Many get their information from posts in online forums. In reviewing forums online, I was able to understand some of the problem. Here are 2 groups of people speaking about the cloud from different perspectives:

  • technical people who implement
  • business people / excited by the prospects of cloud


Technical folks share their experiences to help others trying to do similar tasks. When they speak about cloud they outline the tool set and the automation which they really love. Networking, IP allocation, load balancing and cloning server instances are easier. Their posts are academic and written in a way that documents their experiences. They tend to solve technical issues and aren’t speaking of fiscal feasibility.


Non-technical people love the promise of the cloud but don't deploy applications themselves. Excited about this booming industry, they advocate for modernization and streamlining. "You are old school if you aren't in the cloud by now!" is an actual quote from a LinkedIn forum post. Unfortunately, they aren't able to see the underlying issues I am speaking of in this article. Missing from their discussions are actual implementation experiences.

The Way We Thought the Cloud Should Work

Let’s take a utility like electricity. You consume electricity and it’s measured in kilowatt hours. Thanks to Save on Energy for the example I’ll use below:

Let’s say your refrigerator uses 300 watts to run. Let’s compute kWh in a months time for this refrigerator:

  • 300 watts X 24 hours = 7,200 watt-hours per day
  • 7,200 watt-hours per day / 1000 = 7.2 kWh per day
  • 7.2 kWh per day X 30 days = 216 kWh per month
  • 216 kWh per month x $0.10 per kWh = $21.60 per month
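The steps above can be wrapped in a small helper, using the same wattage and rate as the example:

```python
# The refrigerator arithmetic from Save on Energy's example, as code.
def monthly_cost(watts: float, rate_per_kwh: float, days: int = 30) -> float:
    """Constant-draw device: watts -> kWh for the period -> dollars."""
    kwh = watts * 24 * days / 1000  # watt-hours per period, scaled to kWh
    return round(kwh * rate_per_kwh, 2)

print(monthly_cost(300, 0.10))  # 21.6
```

Because the draw is constant, the bill is constant too; there's nothing to dial down, which is the point of the analogy.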

This refrigerator will always use 300 watts and always cost about $21.60 per month. The refrigerator cannot detect that it’s empty and reduce cooling to accommodate. That’s a bit like how legacy applications work. There is a base resource level needed to run something like an MS-SQL database even if it has 1 user connected.

A light fixture is a better representation of cloud consumption. Need the light? Turn it on and the billing starts. The moment you turn it off, billing stops.

So we envision cloud computing as a resource like electricity. The trick is that Cloud Computing requires 4 resources:

  • Processing in GHz
  • Memory (RAM)
  • Storage
  • Bandwidth

Cloud providers bill a usage rate per hour for each of these resources for a total of 720 hours per month. Most of our enterprise applications need a minimum configuration of RAM and CPU at all times (think refrigerator). Dialing back a Microsoft SQL database server to 1 vCPU and 1GB of RAM could crash that server. These servers cannot give back resources, so they need consistent levels of RAM and CPU.
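Here's a hedged sketch of that per-hour metering across resources. The hourly rates are invented for illustration only; real pricing varies by provider, region, and instance family, and most providers bundle vCPU and RAM into instance types rather than pricing them separately.

```python
# Illustrative per-resource, per-hour billing over a 720-hour month.
HOURS_PER_MONTH = 720

def monthly_instance_cost(vcpu: int, ram_gb: int, storage_gb: int,
                          rates: dict) -> float:
    """Sum hypothetical hourly rates per resource, times the month."""
    hourly = (vcpu * rates["vcpu"] +
              ram_gb * rates["ram_gb"] +
              storage_gb * rates["storage_gb"])
    return round(hourly * HOURS_PER_MONTH, 2)

rates = {"vcpu": 0.02, "ram_gb": 0.005, "storage_gb": 0.0001}  # made up
# A database server that can never "give back" its 8 vCPU / 64 GB:
print(monthly_instance_cost(8, 64, 500, rates))  # 381.6
```

Run for all 720 hours (because the database can't scale down), the bill is fixed, exactly like the refrigerator.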

So Think of Cloud this Way:

Your refrigerator is like a legacy app. It will pull 300 watts almost all the time, even if you have no food in it. The only way to save money on electricity with a fridge would be to completely unplug it, and unplugging a database server from the environment means the application is unusable.

Your lights are similar to modern, horizontally scaling applications. Turn them on and off, when off you pay nothing.

Netflix Runs in the Cloud So it’s Good Enough For Us!

Using Netflix as an example is interesting. Netflix has built an ecosystem around Amazon Web Services, and they are thriving. They've also made a decision to stay out of the infrastructure business, even at a premium cost. Had they aimed for the best cost per compute unit, they would have built their own datacenters. But they weren't aiming for the lowest cost; they wanted amazing software, delivery, and scaling. Many argue that avoiding infrastructure management allowed Netflix to maintain its lead over competitors.

So What Does An Enterprise Do?

The key will be to understand how your application scales and how pricing is computed. Here are important things to consider before taking the plunge into cloud computing:

Can I load balance smaller servers rather than having larger servers?

[why – you will find that many 1CPU+1GB virtual instances will be cheaper than 4CPU+4GB server instances]
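To illustrate with invented prices (not any real provider's), four small load-balanced instances can price out below one large instance with the same total vCPU and RAM:

```python
# Hypothetical hourly prices; real rates vary widely by provider.
HOURS = 720
small_price_hr = 0.012  # 1 vCPU + 1 GB instance (made-up rate)
large_price_hr = 0.060  # 4 vCPU + 4 GB instance (made-up rate)

cost_4_small = round(4 * small_price_hr * HOURS, 2)  # same total resources
cost_1_large = round(large_price_hr * HOURS, 2)
print(cost_4_small, cost_1_large)  # 34.56 vs 43.2 per month
```

The gap only matters if the application can actually spread its load across the small instances, which is the whole question being asked.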

Can a virtual instance provide the IOPS needed for the database layer of our application?

[why – virtual database instances with 8 vCPU and 64GB of RAM often don't perform as well as a physical server with similar specs.]

It’s important to understand how the provider scales and prices computing. It would be best to use their calculator to develop pricing models for your app as it exists now and later.

Depending on the way your application scales it may work in the popular public clouds. Having high CPU or RAM needs puts your environment into a category that may not fit well in the cloud. For these cases, traditional service providers offering a mix of physical and virtual instances are ideal.

Should you Modernize Your Application?

It’s hard to say. If your current systems are solid and perform well, the justification to recode is hard to make. While many would love to modernize their current code it’s not practical for most. Millions in investments have gone into the development and integration of existing applications for many enterprises.

New application projects can be built using modern methods and integrated. A slow transformation to modern code is the approach, rather than a complete overhaul.

Self Assess for Cloud Readiness

You can self-assess your environment for cloud readiness. Here are some questions to ask:

  • Can your database cluster with many instances that use light servers and sharding?
  • Can you use a database as a service offering online? (Here are some of the more notable DBaaS‘)
  • Can you have your application scale back when utilization is low? (On its own).
  • Can you introduce containers, have orchestration or serverless technology?
  • Can you use servers that need less than 8GB of RAM?
  • Can you split tasks across servers that have low CPU and RAM needs, with those tasks load balanced over an array of small instances?

You can ease your way into the cloud if you can perform some of these cloud readiness tasks. If this is difficult it will be time to find a specialized cloud that can mix physical and virtual servers. At this time, many of the popular public clouds cannot give you actual physical devices. They can give an equivalent in vCPU and RAM but this doesn’t guarantee IOP performance.

We hope that we’ve given you enough information to continue this journey. You understand your application best. 
