Green data centers are important in our computer-driven world. But what are they? These advanced facilities are designed to be eco-friendly: efficient technology and sustainable practices are what earn them the “green” title. Data centers run our digital world, so improving them matters. Better operations save resources and money, proving that data centers can be green and cost-efficient at the same time.

Looking at the bigger picture, sustainability in data centers is changing fast. Facilities now use green energy and smart cooling to reduce their environmental impact. But here’s the key point: data centers now need to host Artificial Intelligence (AI) and Machine Learning (ML) systems. These systems rely on powerful graphics cards that draw a lot of energy, which pushes conventional data centers to their limits.

The point of this blog? To show why green data centers matter so much. As more people adopt AI and ML, green data centers can meet that demand in a way that’s good for both the planet and the pocket. This sets the stage for an efficient, eco-friendly digital future.

Why Do We Need Green Data Centers?

  • Increased Use of Energy & Environmental Impact

Data centers are using more and more energy, and the problem needs fixing fast. Prices per kilowatt-hour (kWh) are rising worldwide. We can see this in Virginia, where the cost per kWh has gone from a modest $0.07 in 2010 to a hefty $0.14 today (source: https://www.energybot.com/electricity-rates-by-state.html). That doubling of energy costs makes efficiency an urgent priority.

This problem is about more than money; it also affects the environment. Data centers consume a lot of energy and produce a lot of carbon, adding to pollution. The carbon emissions of data centers are a significant environmental burden, and a major concern in today’s eco-conscious era.

  • Cost Savings & Long-Term Financial Benefits

Green data center improvement is not only eco-friendly, it also cuts costs, which is crucial in our changing computing world.

Lower Costs: The price of power is high today, which is tough on data centers. Moving to green energy can be expensive at first, but it is a smart choice: traditional power sources are overworked and becoming more expensive. The shift won’t pay off right away, but in the end it is worth it.

Financial Benefits in the Future: Going green helps data centers save money in the long run. Green-friendly facilities cut high operating costs and guard against rising power prices. Data centers that look forward not only save money but lead the way in an industry going green.

Being smart with money is driving this green change. By wisely investing in eco-friendly practices, data centers protect their financial future and play a role in a greener, more trusted computing future.

Key Strategies for Green Data Center Optimization

Energy-Efficient Machines

Green data centers rely on energy-efficient hardware, which is vital for the future of computing.

1. Effective Servers: Green centers use smart servers that draw less power while delivering more output than you might expect. This cuts power use significantly, which is key to staying green.

2. Blade Servers and Virtualization: Blade servers and virtualization help because they are energy smart. They use resources well, lower power use, and improve data center performance. Virtualization lets one physical server run several virtual servers, which reduces the number of machines needed and cuts power use (see the sketch below).
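To make that consolidation saving concrete, here is a minimal sketch; the server counts, consolidation ratio, and wattages are assumptions chosen for illustration, not measurements from any particular facility:

```python
# Rough, illustrative estimate of the power saved by consolidating lightly
# loaded physical servers onto virtualization hosts.
# Every figure below is an assumption for the example, not vendor data.

PHYSICAL_SERVERS = 40      # standalone servers before virtualization
WATTS_PER_SERVER = 350     # average draw of each standalone server
VMS_PER_HOST = 10          # consolidation ratio after virtualization
WATTS_PER_HOST = 600       # a bigger host draws more, but far fewer are needed

hosts_needed = -(-PHYSICAL_SERVERS // VMS_PER_HOST)    # ceiling division
before_kw = PHYSICAL_SERVERS * WATTS_PER_SERVER / 1000
after_kw = hosts_needed * WATTS_PER_HOST / 1000

print(f"Before virtualization: {before_kw:.1f} kW across {PHYSICAL_SERVERS} servers")
print(f"After virtualization:  {after_kw:.1f} kW across {hosts_needed} hosts")
print(f"Estimated saving:      {before_kw - after_kw:.1f} kW")
```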

Energy-efficient hardware is central to why green data centers matter for the future of computing. By using efficient servers, blade servers, and virtualization, data centers strengthen their reputation as a credible home for data and help build a greener, more efficient computing world.

Air Control and Cooling Systems

Cooling systems and airflow control are key parts of eco-friendly data center improvement. They are smart solutions for the next steps in computing.

1. Hot Aisle/Cold Aisle Layout: Eco-friendly data centers use a clever hot aisle/cold aisle design. This layout separates warm and cool air, making cooling more effective: cold air is directed to server intakes while hot exhaust air is pushed away. Data centers use less power, save on cooling, and boost their effectiveness.

2. Free Cooling: Green data centers use free cooling where possible, drawing on cool outside air to chill the facility instead of power-hungry mechanical systems. It is better for the environment and uses less energy overall.

3. Strategic Location: Data centers are choosing sites where they can use outside air for cooling more often. Picking cooler climates lessens the need for mechanical cooling, which helps save power and lowers carbon footprints.

These cooling and airflow methods show how economical and credible eco-friendly data centers can be. By adopting them, data centers make computing more sustainable and efficient, balancing sensible financial choices with care for the environment.

Using Renewable Energy

Green data centers are starting to use renewable energy, which is a smart move!

1. Using Solar and Wind Power: Green data centers generate power from the sun and wind using solar panels and wind turbines. By tapping these resources, they use less non-renewable energy and shrink their carbon footprint, showing that green data centers are serious about keeping our world clean.

2. Buying Energy From a Third Party: Green data centers also have access to Power Purchase Agreements (PPAs). These deals let them buy renewable energy from other producers, often at a price that is locked in. With these agreements in place, they always have a source of clean energy at a predictable cost, which makes it a win-win.

Renewable energy is a turning point for green data center optimization. PPAs show that green data centers are smart and trustworthy, and renewable energy is ultimately better for the environment and cheaper over time, which optimizes operational costs as well.

Efficient Lights and Buildings

Green data center strategies also rely on efficient lighting and smarter building design, which play a major role in keeping a green data center healthy.

  1. LED Lights: LED lights are a staple of energy-saving data centers. They last longer and use less energy than older lighting, and they give off less heat, which lightens the burden on cooling systems.
  2. Green Building Materials: Green data centers use earth-friendly materials and smart designs that allow for natural cooling, ventilation, and strategic placement in cooler locations. This cuts the need for energy-hungry cooling systems and makes operations more sustainable.

These practices underscore the importance of intelligent buildings and good lighting options to sustain a data center’s health – which, in the end, leads to environmentally responsible operations. 

Data Center Infrastructure Management

  • Data Center Infrastructure Management, or DCIM, is vital for running a green data center smoothly. It helps keep performance high, use resources effectively, and foster sustainability.
  • Predictive Analytics for Resource Allocation: DCIM uses predictive analytics to forecast resource needs accurately. It prevents over- or under-use of resources, leading to improved performance and efficiency.
  • Energy Consumption Dashboards: Real-time energy dashboards are part of DCIM. They give managers a clear picture of energy usage, helping them spot inefficiencies and optimize consumption, which lowers costs (see the sketch after this list).
  • Comparative Reports: DCIM includes benchmarking and reporting against industry norms. It keeps data centers efficient and sustainable by aiming to match or beat best practice.
  • High-Efficiency Power Backup: DCIM helps deploy high-efficiency Uninterruptible Power Supply (UPS) systems in data centers. These systems protect data during power cuts and reduce energy waste, boosting reliability.
  • Best Practices for Energy Distribution: DCIM helps data centers follow the best energy distribution practices, optimizing the power supplied to equipment. This ensures resources are used efficiently and reduces energy waste.
  • Balance in Data Centers: DCIM helps data centers strike the right balance between redundancy and efficiency, keeping things running smoothly while using resources wisely.
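As a rough illustration of the visibility an energy dashboard gives, here is a minimal sketch that totals rack-level power readings, computes Power Usage Effectiveness (PUE, the ratio of total facility power to IT power), and flags heavy racks. The rack names, readings, and threshold are invented example values; a real DCIM platform would pull them from live meters:

```python
# Minimal sketch of a DCIM-style energy summary.
# Rack readings, the facility total, and the threshold are invented example
# values; a real DCIM platform would pull them from power meters in real time.

rack_kw = {"rack-A1": 4.2, "rack-A2": 6.9, "rack-B1": 3.1, "rack-B2": 8.4}
facility_kw = 30.0      # total facility draw including cooling, lighting, UPS losses
threshold_kw = 6.0      # flag racks drawing more than this

it_kw = sum(rack_kw.values())
pue = facility_kw / it_kw   # Power Usage Effectiveness: closer to 1.0 is better

print(f"IT load: {it_kw:.1f} kW | Facility load: {facility_kw:.1f} kW | PUE: {pue:.2f}")
for rack, kw in rack_kw.items():
    if kw > threshold_kw:
        print(f"  {rack} draws {kw:.1f} kW -- check airflow and workload placement")
```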

If you are curious about how Data Center Infrastructure Management could work for your business, we are happy to talk! DCIM is a great approach to managing green data centers and keeping their operations running smoothly.

Endnote

Green data centers are, without a doubt, the future of computing and data management. With the need to host power-hungry AI and ML systems, green data centers will make the data world a better place to operate in. To make data centers greener, we need energy-efficient hardware, smarter cooling, and renewable energy. But like any change, the transition to a green data center can be hard, and this is where a consultant comes in.

A data management consultant brings knowledge and experience. They can make sure green data centers are good for both the environment and the wallet: they find the right balance, apply predictive analytics, and make the most of renewable energy. In short, green data centers are essential for a clean and efficient digital future, and consulting a seasoned management consultant is a smart first step toward a sustainable and money-wise computing future.

Edge computing is rapidly advancing and becoming increasingly sophisticated. As it continues to evolve, it’s set to have a significant impact on data centers. Let’s take a closer look at five noteworthy trends on the horizon:

Data Centers Transform into Dedicated 5G Providers for Enterprises

When we talk about edge computing, it’s impossible to ignore its close relationship with the 5G network. Edge computing plays a crucial role in supporting data-intensive applications like artificial intelligence (AI) and the Internet of Things (IoT). With the advent of 5G, these technologies are poised for even greater heights.

Engineers are currently hard at work integrating telecommunications capabilities into edge data centers. In the not-so-distant future, we might witness these facilities becoming dedicated 5G providers for businesses. This solution could empower telecommunications providers to embed data centers into customer-facing cloud environments by leveraging technologies like Kubernetes containers. This could also open the door for enterprises to offer personalized 5G access to their employees and guests.

Growing Computing Demands Fuel the Need for Mobile Data Centers

Edge computing brings data processing closer to the source, reducing the distance information needs to travel. Much like how Wi-Fi beamforming enhances the user experience with stronger, faster, and longer-range signals, edge computing delivers better performance through reduced latency. Some companies are even exploring the concept of mobile data centers to bring computing closer to users.

These mobile data centers typically fit into shipping containers or are compact enough to be towed on trailers. While data center providers strategically select their locations, they can’t always be where their customers need them. Mobile data centers could step in, particularly for large-scale events like the Olympics or the Super Bowl, where technology-driven experiences and interactive apps are the norm. They could also address connectivity challenges in remote areas.

Tackling Growing Infrastructure Demands with Edge Computing

Many companies adopting increasingly resource-intensive computing applications soon realize that their existing data center infrastructure falls short. Here, edge computing comes to the rescue, offering a solution to current and anticipated challenges. Edge computing achieves “cloud offload” by reducing the network traffic traveling to and from the cloud.

With edge computing, businesses can extract value from data faster as it’s processed closer to its source. Additionally, companies have the option to reduce data size before sending it to the cloud. These advantages make a compelling case for relying on edge infrastructure, particularly when traditional data centers are stretched to their limits.

Edge Computing Aligns with Sustainability Goals

Alan Conboy of Scale Computing highlights how technologies like edge computing can significantly reduce the environmental footprint of IT operations. With compact “data center in a box” solutions that consume less power for operation and cooling than a single refrigerator, businesses can enhance their green efficiencies. Collectively, these small changes can lead to substantial environmental benefits. Mobile edge data centers, in particular, hold promise in supporting corporate sustainability objectives.

Robotic Security Guards for Edge Data Centers

Advancements in AI and the IoT have diminished the need for extensive human staffing within data centers. While humans still play a role in daily operations, their involvement often revolves around confirming recommendations made by advanced management platforms. In this context, Switch, a technology infrastructure specialist, is exploring the deployment of robots to patrol edge data centers and other mission-critical facilities.

These robots boast features like bullet-resistant exteriors and night-vision capabilities, making them ideal for perimeter security. Their presence could prove invaluable, especially as some data center providers are moving towards smaller on-site teams. Robots can swiftly detect and report issues, prompting human intervention when necessary.

Connecting the Dots

The relationship between data centers and edge computing is a recurring theme in discussions about the future of technology. This overview of emerging trends sheds light on why these two topics are so closely intertwined. One thing is clear: analysts and tech enthusiasts should closely monitor these developments, as they often shape and influence each other.

We live in a world where information is the most valuable commodity. Typically, information is stored in the form of data on the internet. Companies are constantly using, collecting, and analyzing this data to make decisions. Because of this, it is crucial for businesses to protect their customers and their own interests by securing their data.

Your business relies heavily on proper data storage, security, and handling, so it’s crucial you find the right place to house it all. The right data center should have the highest security protocols and the latest technology. It should also offer scalability, so your data can grow as your business does. Finally, the data center should be reliable and offer fast access to data when needed.

 

Today’s business world is powered by data centers, providing essential computing and storage services. They are the backbone of digital operations, ensuring businesses run smoothly and securely. Data centers are becoming increasingly vital for businesses of all sizes.

 

A business’s decision to choose a data center or colocation provider is paramount.

Still unsure if colocation is the right fit for your growing business and how to pick the right Data Center? You no longer have to worry!

 

We have outlined seven requirements here that are not intended to be a definitive list, but they are an excellent starting point.

 

Let’s buckle up!

1. Location

Choosing a data center depends heavily on its location. Data centers should be located in areas with reliable access to power and the internet, as well as nearby transportation infrastructure. Additionally, data centers must be in a secure location with good physical security measures and safeguards. It is important to consider how susceptible the area is to natural disasters like tsunamis, floods, hurricanes, and earthquakes.

2. Reliability

Downtime is a bad thing for businesses. If you are looking for a data center or colocation provider, reliability is essential. For data centers, reliability is measured by uptime. A reliable provider should offer five nines of uptime, meaning it is available at least 99.999% of the time.
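To put “five nines” in perspective, here is a quick sketch that converts an uptime percentage into the downtime it allows per year:

```python
# Convert an uptime guarantee into the downtime it allows per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for label, uptime_pct in [("three nines", 99.9), ("four nines", 99.99), ("five nines", 99.999)]:
    allowed_downtime = MINUTES_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{label} ({uptime_pct}%): about {allowed_downtime:.1f} minutes of downtime per year")
```

Five nines works out to only about five minutes of downtime in an entire year.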

3. Security

Your data is crucial to your business. A data center must have a suitable security system, since it holds all of your corporate data and applications. With the average cost of a cyber attack on a data center running around $4 million, DDoS protection should be part of the service. To safeguard assets, data centers should employ security software and technology, but they should also have robust physical security, such as locks, cameras, and on-site security officers. They should also ensure that these security measures do not impede the service’s scalability.

4. Connectivity

It is imperative for data centers to have interconnections with the right providers. Interconnections can add great value to your business. With this, you can connect to a wide variety of major bandwidth providers, ensuring faster and more reliable performance for your business’s servers. Interconnections allow for low latency and a high-performance network, as well as access to multiple providers. This provides more redundancy, scalability, and flexibility. It also means that you can quickly adjust your network needs to accommodate sudden changes in demand.

5. Power and Cooling

Power and cooling are the primary resources a data center needs, and brownouts or blackouts can occur at any time, so look for backup generators alongside high-bandwidth connectivity. To improve energy efficiency, many data centers also use carbon-neutral energy to reduce the impact of their carbon footprint on the environment.

6. Support

The support provided by the data center is important for businesses that do not have a dedicated IT team. A responsive and knowledgeable support team can help troubleshoot issues and answer questions, reducing the risk of downtime and improving the efficiency of your business operations. 24/7 support is especially important as it ensures that you can get help whenever you need it.

7. Scalability

The expansion of a business requires a prepared plan to deal with the challenges that will arise in the future. Your business needs may change over time, so it’s important to choose a data center that can accommodate your future growth. Look for a data center that offers scalable solutions, such as additional rack space or bandwidth.

Make the right Decision

In conclusion, choosing the perfect data center colocation for your business is a critical decision that requires careful consideration of several factors. Every aspect matters in ensuring that your business data is safe, available, and easily accessible. Therefore, it’s worth seeking the guidance of an expert like ColoAdvisor, with vast experience in the industry, who can help you make informed decisions. With that expertise, you can rest assured that you will find the data center colocation that suits your business needs, budget, and growth plans. So don’t hesitate to engage an expert like 515 Engine to help you navigate the complex world of data center colocation and take your business to the next level.

Data is considered the new oil, and in many ways it is now valued more highly than oil. Data is the digital equivalent of crude oil because it fuels and runs businesses around the world, just as crude oil powers machines and vehicles. But there is a stark difference between the two: the supply of crude oil has been in constant decline since we began using it, whereas the amount of digital data grows every single day.

If companies don’t make wise use of their data, their businesses might see a drastic drop in performance. Wise usage covers every aspect of working with data: storage, high computing power, good server connectivity for smooth data transfer, and skilled professionals to analyze it. As data grows with every passing year, so does the need to handle it well.

Back in the day before the Internet and even a few years after it, connectivity among people around the world was minimal and so was the amount of data. Fast forwarding to this day, with the rise and evolution of technology and the reliance of businesses on the Internet for connectivity and reach, the amount of data that emerges with it has never been larger.

Over the years, to manage this large amount of data, we have come up with storage capacities, processing power, and network connectivity that became better incrementally with time. We have reached the point where we can store huge data and run computations on it in a different location and can access it from elsewhere. 

In this day and age, there are two major storage and computing options apart from hosting in-house storage servers: cloud services and data centers. One might ask which of the two companies prefer. The answer is, it depends. Both cloud platforms and data centers provide broadly similar features such as storage, computing, networking, and analytics. Cloud platforms offer these facilities on pay-as-you-go pricing while putting a veil over the underlying hardware and infrastructure. Some companies do not need access to the hardware and infrastructure to carry out their tasks, but those that do need that access, along with extra flexibility, customization, and ownership, may opt for data centers.

The following are some important types of data centers:

  • Enterprise Data Centres: Developed and owned by an organization, mostly for its own employees.
  • Colocation Data Centres: A data center whose space and resources are rented out to other companies.
  • Hyperscale Data Centres: Data centers built specifically for processing large amounts of data and running scalable applications.

With the advent of versatile cloud services from companies like Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and many more, one might expect the cloud to give data centers a run for their money. But data centers offer qualities that can outweigh cloud services, such as lower latency, higher performance, stronger security, greater control, and data sovereignty.

Due to this reason, the data center industry is predicted to expand over the coming years. 

As per the report by Statista Market Forecast, the Data Center industry in the United States is predicted to grow by 4.12% by 2027 resulting in a market volume of US$117.50 Billion in 2027.

Considering these aspects, let’s take a look at some of the data center trends to watch out for in 2023:

Sustainability pressures and the rise of green data centers

The infrastructure inside a data center needs a significant amount of resources to maintain properly: cooling for the servers, constant power to keep them running, and other maintenance costs. According to research by Ernst and Young in December 2022, the data center industry contributes 4% of greenhouse emissions.

Since environmental conditions have gone downhill due to increased global warming, governments are enforcing the use of sustainable energy sources for electricity generation to protect the environment. While these sustainable sources can power smaller data centers with less data to handle, they cannot easily power hyperscale data centers. Using solar power for a hyperscale data center would require solar panels covering multiple football fields, which may or may not be cost-effective for some countries.

In 2023, as governments around the world encourage the use of renewable energy, it is predicted that they will increase budgets for it, which may benefit data centers around the world.

Increased need for better hardware and infrastructure

With the exponential advancement of technologies like Artificial Intelligence and Machine Learning, IT solution providers around the world have to keep up, and data centers are no exception. They need better and more robust hardware and infrastructure than ever to handle the complex computations that come with AI and ML algorithms and big data processing. They need better cooling systems, better cabling and switch networks, and expert IT professionals for troubleshooting. This kind of processing power was once seen only in the infrastructure of massive research institutes, but it has now become commonplace in data centers, especially hyperscale ones.

Small-scale data centers will increase in number

With the rise of 5G technology in the telecom industry, companies strive for the lowest possible latency for their customers. Since data centers provide decentralized support for computing and memory services, more data centers are required now than ever. Here, small-scale data centers come into the picture. These facilities will be located locally, close to where data is being generated.

Need for better cybersecurity

With more companies relying on data centers to conduct their business, there is a dire need to increase data security. Because data center infrastructure combines many different software products and technologies, it is not difficult for malicious hackers to find loopholes and vulnerabilities, which can put the businesses of thousands of companies at risk. Solutions like preventing network-based exploitation with intrusion prevention systems and deploying firewalls for small and large-scale applications will be used to protect data centers from security threats.

Colossal IT companies such as Google, Microsoft, and Meta (Facebook) have trillions of gigabytes of data to work with. Losing this pivotal information, or ending up with corrupted data, cannot be tolerated, so they built their own data centers to keep their data intact and encrypted.

A data center is infrastructure that companies use to store, manage, and regulate their data for confidentiality and security.


A data center is outfitted with IT infrastructure, electrical equipment, mechanical instruments, and environmental facilities. It is guarded by physical security such as fencing, iris scanning, and fire detection and suppression. It also uses software called DCIM (Data Center Infrastructure Management), which provides unified administration.


Now let’s understand the types of data centers.


Types of Data Centers

Data centers are differentiated according to their infrastructure and requirements. If you’re looking for a data center, think about what you need from each type.

  • Enterprise Data Centers

The first and foremost type is the traditional enterprise data center: a private facility built, owned, and operated by a company for its own use. Most of a company’s data is stored on equipment it has purchased itself. These data centers can be on-premises or at nearby locations for easier maintenance.

  • Managed Service Data Centers

Managed data centers are third-party providers that rent out storage infrastructure, networking facilities, and maintenance of the equipment. Top MSPs (Managed Service Providers) include IBM, Accenture, HCL, Infosys, and Wipro, which provide services that help IT companies expand their businesses.

  • Colocation Data Centers

Colocation, also known as ‘Colo’, is where individual companies rent only white space (the floor space for storage and network hardware) to meet their specific requirements. Their hardware sits in a separate area of the building from other data racks. It is considerably cheaper than using an MSP, since only the hardware space is rented rather than the whole infrastructure, which is why it is trending among smaller IT companies these days.

  • Cloud Data Centers

A cloud data center is an external, off-premises data center where users purchase services and storage through a virtual infrastructure. The crucial difference between an MSP and a cloud provider is that a cloud data center offers services on a subscription basis. Companies rent cloud services so they don’t have to pay for any hardware. Popular cloud service providers such as AWS, Google Cloud, Microsoft Cloud, IBM, and Salesforce offer this virtual infrastructure.

  • Edge Data Centers

Edge data centers are devices (highly configured PCs) placed on the premises. They sit close to the applications of the user and can deliver services and content with minimal latency. The reason for keeping the servers closer is that they are used as backups in case of main server failure.

Considering the various categories of data centers and their facilities, people might get confused about which one is the best fit for them. 515 Engine plays a key role in clearing up their perplexity. Thus let’s pay attention to why IT companies are looking for a data center.

Why do you need a data center?

  • Abundant infrastructure to manage data.
  • Adequate space for storage purposes.
  • Sufficient dedicated employees.
  • Resolves data security issues.
  • Serves networking facilities.
  • High scalability of data.
  • Maximum coherence in communication.
  • Resistance to power failure.

Hence companies reach out to third-party services such as AWS or Azure for rental data center facilities.

Instead of spending hours online looking for a data center, ColoAdvisor makes the process 10x faster with our handpicked list of data centers across the USA. Just fill in your specifications for cabinets, power requirements, desired location, and bandwidth, and in moments you’ll receive multiple quotes tailored to your requirements.

Skip the headache of drafting emails to service providers that might never even be acknowledged; instead, simply scan through your quotes. That makes the whole process a cinch. If you still have questions about data centers or ColoAdvisor, you can always reach out to us at https://www.515engine.com/contact/

Introduction

I’d like to talk about colocation pricing. In this brief time, I’d like to cover:

  • how data center colocation pricing works
  • why data centers vary in price
  • and real world pricing examples

We can only imagine the curiosity an IT director faces as they begin to outgrow their in-house data center or realize that their new building lease has no suitable space for a good server room.

Today our goal is to put some clarity around the costs associated with colocation, from smaller regional providers to notable name brands. If you’d like a high-level overview of what colocation is, check out our article about colocation here.


Let’s Start With – the Components of Colocation Pricing

There are several components. In no particular order, they are space, power, cooling, bandwidth, and cross connects. We’ll begin with space.

Space

There are several ways space is sold in a building-based data center. I am not going to cover containers or other portable data center footprints. Here are some of the options:

  1. By the U (a unit of measurement that is about 1.75 inches or 44.45 mm)
  2. 1/4, 1/3, 1/2, or Full Cabinet / Rack
  3. Preconfigured Caged Spaces with Racks inside OR
  4. Preconfigured secure rooms that are almost completely isolated from the rest of the facility

Moving on to Power and Colocation Services Pricing

This comes in several different ways and can be one of the more confusing aspects of colocation pricing. When you are speaking about racks, you have:

  • 20A/120V power ~ 2.0 KW
  • 30A/120V power ~ 2.8 KW
  • 20A/208V power ~ 3.3 KW
  • 30A/208V power ~ 5 KW
  • 50A/208V power ~ 8.3 KW
  • 60A/208V power ~ 10 KW

This can go way higher, up to 60 or even 80KW per cabinet depending on the datacenter. There’s also a 3 phase option for most data centers.

Here’s how you calculate kilowatts:

Take your amperage and multiply by volts. But I only use 80% in my calculation, since you aren’t supposed to load the entire circuit or you risk tripping a breaker. So, for example:

  • 30A x 208 = 6240 BUT

80% of 30 amps is only 24AMPS. So let’s redo the formula:

  • 24A x 208V = 4,992 watts; round up to 5,000 watts. To express that in kilowatts, move the decimal three places to the left: 5 kW. (See the sketch below.)
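Here is the same rule of thumb as a small sketch you can adapt to any circuit; the 80% derating mirrors the calculation above:

```python
# Usable kilowatts from a colocation power circuit,
# applying the 80% rule of thumb described above.

def usable_kw(amps: float, volts: float, derate: float = 0.80) -> float:
    """Usable kilowatts after derating the circuit to 80% of its rating."""
    return amps * volts * derate / 1000

for amps, volts in [(20, 120), (30, 120), (20, 208), (30, 208), (50, 208), (60, 208)]:
    print(f"{amps}A/{volts}V ~ {usable_kw(amps, volts):.1f} kW usable")
```

Running it reproduces the rough figures in the list above.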

Also, these circuits can be delivered as single circuits or as primary and redundant pairs. Redundant circuits double the capacity, with the understanding that you’ll only ever use half, since the second circuit is meant for failover only.


Let’s talk hyper scale for a moment…

For hyperscale-sized data center use, it’s not unusual to purchase space in terms of kilowatts or megawatts of power. Ordering 1 MW of data center space makes energy the main metric, and the facility will have a spec covering the circuits, cabinets, cooling, and space needed to accommodate that power draw. This is also referred to as wholesale colocation.

Let’s Talk Cooling

Cooling is usually included as a component of power pricing unless your requirements for your power are really high. Most facilities can offer 10 KW cabinets at a standard price. Increasing this to 20 or even 60KW can incur higher cabinet pricing which is mainly to cover the cooling aspects of the service.

Bandwidth & Cross Connects

Bandwidth and cross connects show up in every data center quote. There are 2 ways I typically see this happening.

A client will establish bandwidth like this:

  • purchases 1 or 2 cross connects from the data center THEN
  • purchases bandwidth from 1 or 2 different providers for failover.

OR

Alternatively, some data centers price out their own blended bandwidth made up of two or more internet service providers. This lets the data center deliver bandwidth to a colocation client quickly, without the wait and complexity of setting up failover via BGP, and it’s also easier than signing two contracts, one for colocation and one for bandwidth.

Moving on.

Pricing and Data Center Margins

Data center space is priced like any other product, using typical accounting models for overhead and margin, with some variances. Building out data center space as a provider isn’t a known quantity, and there are many ways to start a data center operation, such as:

  • purchasing (financially) distressed or foreclosed data centers
  • pouring concrete and building a facility from scratch
  • purchasing and retrofitting existing buildings to be data centers

The point here is that the initial investment can vary widely, unlike other businesses and franchises with more predictable startup costs. The investment made by the data center provider has to be reflected in its pricing to end users.

Let me talk about the colocation pricing dilemma.

The Power vs Space Dilemma

In addition to the startup costs of building a data center, there are other pricing pitfalls that product management has to avoid when pricing things out, such as:

  • Selling all their space BUT having plenty of power capacity left over (which means selling lots of low density space was the issue) OR
  • Selling all their power BUT having plenty of space left that cannot be sold (selling lots of high density space is what causes this) OR
  • Filling up the data center BUT they have slim profits after having filled the entire site (which means space was sold at too low of a cost)

I’ve been able to witness this miscalculation first hand, and it always seems to go down the same way. A major online company needed lots of space and didn’t want to build its own data center. Not being able to resist the massive influx of revenue, the data center in question sold a deal that was high in power, low in space, and priced at slimmer margins. Once that space was sold, it reduced the power capacity available to the rest of the data center to the point where there was plenty of square footage but no power left to sell in it. From then on, each new client’s power requirements had to be closely calculated, and each additional power circuit could only be sold after an audit of available power and cooling. Some data centers are able to add cooling and power capacity, but in many cases the local utility supply is limited unless major capital expenses are thrown at the issue.

Real World Pricing Examples

Cabinet or Rack Pricing

Cabinets are usually sold in 1.75-inch increments referred to as a “U”. Many cabinets are 42U, but there are smaller and taller ones depending on the facility. For some high-density and space-efficient designs, it’s not unusual to see much taller cabinets. We’ve seen these cabinets run from $300 to $2,000 per month based on region and cooling requirements.

Power pricing graphic: $500–$2,000 per month for 30A/208V power, which is 5 kilowatts

Power Pricing

Power or Energy is really the driving factor for almost all things here.

1 x 30A/208V Power – Primary + Redundant Power tends to cost between $500-$600 per month. Most retail colocation assumes that you will consume the entire amount of power allocated to you which on a 30A circuit would be about 24 amps. Wholesale pricing for power is different.

Cross Connects

Cross connect pricing varies widely, from a one-time charge of $50 up to $350 monthly. While this sometimes causes pain (“why are you charging me so much for a piece of fiber that runs 50 feet?”), data centers often justify it by pointing out that the cross connect is treated as a mission-critical circuit: it is included in the service level agreements and fully monitored, with the data center responsible for keeping it connected and secure. For some data centers it’s a major revenue stream, justified by the multitude of network access available at that particular site. Access to almost all bandwidth providers can allow an infrastructure company to grow quickly, since all their important partners are just a cross connect away.


Bandwidth

What I call “in-house blended bandwidth” varies greatly. Generally, a Gbps to the internet will run between $500 and $1,000 per month depending on the carrier.

Remote Hands

I’ve seen this fall into the range of $100 to $200 per hour. Data centers like to bill in 15- or 30-minute increments, meaning that if you ask for a task to be done (say, rebooting a server) and it only takes 5 minutes, they will round up to the nearest 15- or 30-minute mark. Some data centers will offer better pricing if you commit to a monthly remote hands plan; that way the data center can gauge its supply of on-hand personnel and staff up to ensure timely support, since you pay for it consistently on contract.
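As a quick illustration of how that rounding works, here is a minimal sketch; the $150 hourly rate and 30-minute increment are placeholder figures within the ranges mentioned above, not any provider’s actual pricing:

```python
import math

# Remote hands billed in fixed increments: short tasks are rounded up.
# The rate and increment below are placeholder figures, not a real price list.

def remote_hands_cost(task_minutes: float, rate_per_hour: float = 150.0,
                      increment_minutes: int = 30) -> float:
    """Cost of a task after rounding its duration up to the billing increment."""
    billed_minutes = math.ceil(task_minutes / increment_minutes) * increment_minutes
    return billed_minutes / 60 * rate_per_hour

print(f"${remote_hands_cost(5):.2f}")    # a 5-minute reboot bills as 30 minutes -> $75.00
print(f"${remote_hands_cost(50):.2f}")   # a 50-minute task bills as 60 minutes -> $150.00
```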

Finally…

Things Not Covered

My goal here was to outline core data center pricing. Some colocation facilities offer only the basics: space, power, cross connects, and remote hands, and nothing else. Other facilities can get pretty wild and deliver so many services on top of colocation that it starts to look like managed cloud. If you need to set up a data center strategy, we can reach out to multiple colocation vendors in a single shot, pull back pricing, and present it to you in a very easy-to-read manner. This is a free service; just look for the link below to start the process. https://www.515engine.com/contact/

As always, thanks for joining us this month and I look forward to seeing you on our next video.

Introduction

The text that follows is a summary of our video. This month we discuss the real technical differences between public cloud vs private cloud. If you’re interested in more details about the cloud, we are constantly adding to our What is Cloud Computing Library.

We Aren’t Discussing Public vs On Premise Cloud

On Premise vs Public Cloud

When you search for diagrams for public vs private cloud, you tend to get more of a logical diagram that shows on-premise vs public cloud which we’ve drawn above in our own diagram. That depiction, while helpful, isn’t an accurate representation of the differences between public and private clouds.

Let’s start with the basics, what is a Hypervisor?

To kick things off, we need to address what a hypervisor is so let’s define it:

A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine.

Courtesy of Wikipedia

Also, for ease of discussion, I am going to use terms from the VMWare ecosystem of virtualization, as it’s simply more relatable for most people. Please note there are many commercial and open source hypervisors out there.


Let’s set the stage. When building a cloud using hypervisors, you start with a server, but instead of installing Windows or Linux you install VMWare vSphere. vSphere is the hypervisor: it makes this machine a HOST and lets you add a variety of guest operating systems, which can be almost any operating system that runs on x86 architecture. So let’s walk through public cloud and how the hypervisor is situated. I’ll start off by building groups of hypervisor machines; in the VMWare world, this is referred to as an ESX cluster.


This cluster can absorb additional physical servers very easily, which allows the resources of the new server to be allocated to the cluster as a pool of resources. The virtual instances we use are spread among many servers throughout the racks, and if one server goes down, those virtual instances are spun up instantly on a different machine.

Public Cloud

Remembering that this example is for public cloud, look at how providers sell VM instances. Their clients don’t really know what infrastructure is behind the scenes; they don’t see the complexity of grouping hypervisor machines together. They just see the individual virtual machine instances that they purchase, typically through a portal that lets them add servers, CPU, RAM, and storage. The client is only responsible for the actual VM instances, not the underlying infrastructure, which is no simple feat to manage properly.

As far as billing goes, the clock starts when you spin up an instance, and instances can be billed for up to 720 hours per month (24 hours x 30 days). So in theory you are mixed in with other firms on these massive ESX host farms, which are logically separated. The networking between all of this is mainly software defined, and the public cloud can add capacity simply by adding rows of servers and storage to keep some level of headroom above the forecasted client need.
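As a rough sketch of how that utility-style billing adds up, here is a tiny example; the $0.10 hourly rate is a placeholder, not a quote from any provider:

```python
# Utility-style public cloud billing: you pay for each hour an instance runs,
# up to roughly 720 hours (24 x 30) in a month.
# The hourly rate below is a placeholder, not any provider's price.
HOURS_IN_MONTH = 24 * 30

def monthly_instance_cost(hourly_rate: float, hours_running: float) -> float:
    """Cost of one instance for the month, capped at a full month of hours."""
    return hourly_rate * min(hours_running, HOURS_IN_MONTH)

print(f"${monthly_instance_cost(0.10, 720):.2f}")   # left running all month -> $72.00
print(f"${monthly_instance_cost(0.10, 100):.2f}")   # run only 100 hours     -> $10.00
```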

Sample Public Cloud Offering Logical Diagram

Public Cloud in Review:

  • Massive ESX clusters
  • Instances are in a community cloud.
  • Secure, but with limitations on custom hardware

Learn about some of the limitations of public cloud in our Disadvantages of Cloud Video

In public cloud, you don’t control the hypervisor; you are renting instances on someone else’s hypervisor.

Switching to Private Cloud…

Keeping that terminology in mind, imagine a cloud provider allocates three servers to you and builds an ESX cluster on them for you. That is three servers, each with a hypervisor, clustered so that all the resources of these machines are pooled. Additionally, the provider gives you access to storage and network, and you can now allocate your VM instances up to the limit of your cluster.

Adding a 4th ESX Cluster Server to increase RAM and CPU by 25 units each

Let’s say you use 3 Servers for your cluster giving you the following capacity:

  • 100 vCPUs
  • 100 GB of RAM

You can create 100 virtual servers, each with 1 vCPU and 1 GB of RAM. To grow, you can’t go to the service provider and ask for just one additional virtual machine instance (e.g. 1 vCPU, 1 GB RAM); you will need to add another dedicated server to the ESX cluster. That gives you another bucket of resources from which you can add more VM instances with CPU and RAM.

When you grow, there’s a minimum step you need to take, each at a substantial cost, because you are buying one full server of compute even if you only want to add a single VM instance with 1 GB of RAM and 1 vCPU.
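Here is a minimal sketch of that step-wise growth; the per-host capacity of 25 vCPUs and 25 GB of RAM is an assumed figure for illustration, since real hosts vary:

```python
import math

# Private cloud grows in whole-host steps: even one extra small VM can force
# the purchase of another full server for the cluster.
# Per-host capacity below is an assumption for illustration.
VCPUS_PER_HOST = 25
RAM_GB_PER_HOST = 25

def hosts_required(total_vcpus: int, total_ram_gb: int) -> int:
    """Smallest number of hosts whose pooled resources cover the requested VMs."""
    return max(math.ceil(total_vcpus / VCPUS_PER_HOST),
               math.ceil(total_ram_gb / RAM_GB_PER_HOST))

print(hosts_required(100, 100))   # 4 hosts cover 100 VMs of 1 vCPU / 1 GB each
print(hosts_required(101, 101))   # one more VM pushes you to a 5th host
```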


What is Bare Metal Hosting?

With some hosting providers, you will see an offering referred to as Bare Metal. Bare Metal is where you are handed raw machines on which you can add your own hypervisor layer and create your own ESX-like environment.

In this case, you are no longer relegated to just VMWare and you can look at other commercial or open source hypervisors like Linux KVM or Xen.

So in public cloud you are using a shared hypervisor layer managed by the hosting provider. In Private Cloud you are using a private hypervisor layer where it can be managed by either the service provider or the end user.

There are many exceptions to these rules; you’ll find exceptions to nearly everything I’ve said, but those are the fundamentals we’ve seen here at ColoAdvisor. In the end, it comes down to who manages the hypervisor and whether it is shared or dedicated.

For additional information on cloud computing, check out our What is Cloud Computing library and also check out Is Virtualization Needed for Cloud Computing. You can also reach out to us at anytime using our contact page.

Estimated reading time: 3 minutes

It’s All About the Density

One of my earliest clients (circa 2000) wanted to completely fill a standard colocation cabinet with as many servers as it could physically hold. While logical, this was a big issue for us, as our data center at that time could only cool a single cabinet up to 2 kilowatts (kW). So while we could physically fit 42 servers into a single cabinet, we didn’t have the cooling capacity to keep up with that much heat in such a small space. Packing the cabinet that full would not only cause the equipment in the cabinet to overheat, it would also make our data center too hot to host any other equipment, even though we had plenty of physical space left.

So our ability to cool our cabinets looked like this:

  • What we could cool: 20 amps @ 120V ≈ 2 KW per cabinet
  • What our client wanted to do: 84 amps @ 120V ≈ 10 KW per cabinet
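To see why a fully packed cabinet blew past our cooling budget, here is a rough sketch; the per-server wattage is an assumed figure for 1U servers of that era, not a measured value:

```python
# Why the fully packed cabinet exceeded our ~2 kW of cooling per cabinet.
# The per-server wattage is an assumed figure for illustration.
SERVERS_IN_CABINET = 42
WATTS_PER_SERVER = 240           # assumed average draw per 1U server of that era
COOLING_LIMIT_KW = 2.0           # what our facility could cool per cabinet back then

heat_load_kw = SERVERS_IN_CABINET * WATTS_PER_SERVER / 1000
cabinets_needed = heat_load_kw / COOLING_LIMIT_KW

print(f"Heat load of a packed cabinet: {heat_load_kw:.1f} kW")
print(f"Cooling available per cabinet: {COOLING_LIMIT_KW:.1f} kW")
print(f"Cabinets needed just to stay within cooling: {cabinets_needed:.0f}")
```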

Welcome to The Low Density Era

For those that had small equipment which drew a lot of power, I sometimes had a hard time explaining that they’d need to buy an additional cabinet and split the servers between the 2 cabinets which left huge gaps intended to help with cooling. To make matters more interesting, blade servers were introduced which allowed much higher computing in a smaller footprint. As IT departments upgraded their equipment from traditional servers to Blade servers, I was called upon to “rescue” IT departments from these low density colocation facilities into facilities that would cool well past 2–4KW per cabinet.

The Work Arounds

At this point in history it was not normal for a data center to cool 10KW per cabinet, but innovators and more modern facilities saw the trend toward even higher computing and began to build out 10KW and 15KW cabinet cooling capabilities. Older facilities needed complete retrofitting to get close to this, or added drastic measures like in-rack cooling, expanded cooling equipment coupled with baffles, more porous floor tiles, and a host of other stop gaps.

Today

Enter 2020. In our 515 Engine database, we have facilities that can cool above 60KW of power per cabinet in taller cabinets designed to pack in hardware. With this advanced cooling capability one could not expect that the per cabinet pricing would be the same as the 2–4KW cabinets of old. While not everyone can go up to 60KW of cooling, it seems that a norm has settled into the 10–20KW cabinet range.

In a recent project, our client was running closer to 6–8KW per cabinet. They had a very specific technology stack that didn’t need blade servers. As I worked with reps at various colocation facilities, we purposely located and quoted data centers that were slightly older. While my client was fine paying for the 8KW of power they needed, I knew that putting them into a facility constructed to accommodate far greater densities meant they’d pay a premium. We ended up finding a facility built to cool high density, but not ultra high density, power usage. The price per square foot in this space was substantially less than others.

With this strategy, the only risk for my client is that if they ever change their technology stack to ultra high density, they may be forced to move. Since there are no plans to increase their density for quite some time, we thought this would be fine.

Finally

Having a perfect understanding of your power consumption by the rack is critical. It’s literally the first step we take when we engage with any of our clients. Depending on your hardware vendor, there are great power calculating tools out there.

Estimated reading time: 5 minutes

Mainframes? Well, for large organizations, federal and state government entities, Mainframes are still a thing!

For our most recent development project, we modernized a legacy application used by a professional association. It was a complete re-write of their application, which we decided to develop in Google’s Firebase; Firebase is similar to AWS Lambda and falls into the serverless category of infrastructure. While the application is critical to our client, the concurrent user demand for it isn’t heavy, so we estimate our hosting costs will be less than $20 per month. Having the ability to rewrite something from scratch allowed us to use serverless hosting, but this is a luxury most enterprises cannot realize, and it serves as the inspiration for my post today.

As much as I’d like to brag about our coding skills, I’m using this opportunity to reflect on the idea that for every born on cloud application running right now, there are probably hundreds of applications that don’t fit the cloud. While not an exhaustive list of categories, here are several to think about:

  • Client-Server Applications (multiple thick clients connecting to an application + database server)
  • Web-based applications – 3 tier architecture, with Vertical scaling
  • IBM iSeries or IBM pSeries
  • Solaris on Sparc Based Systems
  • DEC Alpha running OpenVMS or Tru64
  • PDP running RSX-11M, RT11, RSTS
  • VAX running VMS
  • HP3000 running MPE/iX 7.5

With the cloud and its utility-based billing, I know executives started questioning their IT departments to understand when they were going to save money and migrate their applications to the cloud. I also suspect that IT departments got tired of telling said executives that, for the legacy applications they employ, the cloud wasn’t an option.

Using a presentation shared by Cloudability at AWS Reinvent a few years back, I paraphrase what they said about the potential paths for moving legacy systems into the cloud:

Re-Host:

The movement of systems and servers into a different infrastructure provider as is. This may offer better pricing and easier scaling but it’s generally a lateral movement with some marginal improvements possible.

Re-Factor:

The ground-up rebuild of your application to make it cloud-native.

Re-Platform:

The movement of systems and servers into a different infrastructure, but utilizing some systems that offer instant scaling, such as a cloud database or cloud storage.

How Do Re-Host, Re-Factor, and Re-Platforming Fit into Legacy Systems?

(For the rest of this article we are going to speak to legacy systems in the hardware realm of Sparc, DEC, PDP, VAX and HP3000)

Circa 1999, I’d speak to retail, manufacturing and insurance IT folks that used mainframes. They wouldn’t ask me about hosting services because they knew that most hosting firms couldn’t help them out with anything except for colocating their hardware. While we wanted to be helpful, we knew we had no expertise when it came to mainframes and so we’d change the topic to bandwidth, circuits or telecom solutions. When we got bold enough to talk mainframes, my talking points would look like this:

“We can host your systems in our colocation space with a custom cage”

OR

“When will you be ‘porting’ your application so we can host it?”

So the mainframe IT managers quickly realized that they’d need to retain and recruit talent while buying up hardware they suspected might be hard to find in the future. I always thought that the company that could figure out the mainframe riddle would be on top of the world, since there were so few options. Meanwhile, while mainframes weren’t at the forefront of my everyday thoughts, I kept looking for a solution while re-writing legacy web applications, moving things in (and out) of the cloud, and finding colocation for our clients. Somewhere along the way, I stumbled across emulation. Say the word emulation and most folks think nostalgically about their childhood Commodore 64, but in the world of mainframes, it’s almost as exciting.

Emulating Mainframes in the Cloud

Emulation probably falls into the re-platform or re-host strategy. There are emulations we’ve seen for clients that run DEC PDP, VAX/VMS, HP3000, and Solaris on Sparc, which allows us to talk to hosting and cloud companies once again about mainframes. Here’s the concept:

Diagram concept courtesy of Stromasys

Now you can replace legacy mainframe hardware with:

software emulation + Linux + standard x86 hardware

With modern-day operating systems and the really powerful chipsets, we found that the performance of applications is greatly improved even with the emulation layer added. Shrinking racks of hardware into a single rack becomes a reality. So there are now easier paths to take in keeping mainframes alive:

  1. Emulate the legacy hardware/OS, get rid of the legacy hardware, and continue to host it yourself on regular x86 hardware.
  2. Emulate the legacy hardware/OS, get rid of the legacy hardware, and find a cloud to host it in.

These options open the doors wide to saving money and leveraging someone else’s massive infrastructure investments. You still need to know how to manage your application and how to interact with the legacy OS but just running it on modern-day hardware and operating systems makes it so much better to work with.

We’d like to get into more detail in future articles about emulation as well as creative strategies around deploying client server applications (still widely used in medical and accounting software) and scaling 3 tier architectures in the cloud when refactoring isn’t an option. If you’d like to talk about emulating a mainframe, reach out to us.