The past 20 years saw a significant migration toward cloud-based services and applications. Many companies sought to outsource most of their computing to take advantage of the cloud's greater scalability and lower costs. More recently, however, that trend has begun to reverse.
Technology companies in particular are transitioning their computing and server infrastructure away from third-party public services and toward on-premises deployments.
This trend doesn’t appear to be letting up any time soon. In fact, IDC estimates that half of the spending on server and storage infrastructure in 2021 was driven by on-premises deployments and that 71% of enterprises are repatriating cloud workloads to improve cost and control. This same report suggests that on-prem spending is projected to grow at an annual rate of 2.9% over the next five years and is estimated to reach $77.5 billion by 2026.
Most public cloud services simply cannot deliver the compute performance needed for advanced AI- and HPC-intensive workloads at the cost and scale larger organizations require. Moreover, with greater flexibility in technology choices and improved server energy efficiency, demand for "general purpose" systems is dropping as businesses look for solutions optimized for their specific workloads.
What will this mean for the future of on-prem workload capabilities and data center design? How will server architecture preference play a role in the transition? Evaluating the current hardware infrastructure is the first step to improving both the business and end-user experiences.
Cost Efficiency and Privacy
Companies that need enough capacity to handle their highest-usage periods without interruption or surprise charges often find that custom-built servers are both more efficient and, in total, less expensive than the all-in-one offerings of most public clouds. Though many still use the cloud for cold storage, compute is often better handled on-prem. Security and data privacy also benefit: keeping computing closer to home means safer, more reliable information management.
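As a back-of-the-envelope illustration of the cost argument above, consider comparing cumulative rental costs against a one-time hardware purchase plus ongoing operating expenses. Every figure in this sketch (the monthly cloud bill, purchase price, and opex) is a hypothetical assumption, not a real price quote:

```python
# Illustrative comparison of cumulative cost: renting cloud capacity
# vs. buying equivalent on-prem hardware. All numbers are hypothetical
# assumptions for the sketch, not real price quotes.

def cloud_cost(months, monthly_rate):
    """Cumulative cost of renting equivalent cloud capacity."""
    return months * monthly_rate

def onprem_cost(months, purchase_price, monthly_opex):
    """Up-front hardware purchase plus ongoing power/space/staff costs."""
    return purchase_price + months * monthly_opex

# Assumed: $9,000/month cloud bill vs. a $150,000 hardware purchase
# with $3,000/month in power, space, and maintenance.
for months in (12, 24, 36, 48):
    print(f"{months:>2} mo  cloud=${cloud_cost(months, 9000):>9,}  "
          f"on-prem=${onprem_cost(months, 150000, 3000):>9,}")

# Break-even point: the month where rental catches up with ownership.
break_even = 150000 / (9000 - 3000)
print(f"Break-even after {break_even:.0f} months")
```

Under these assumed numbers, ownership becomes cheaper after roughly two years, which is why the long-term framing matters more than the sticker price of the hardware.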
For organizations handling sensitive information, a high level of control is key. Twitter, for example, operates three data centers in the U.S. with hundreds of thousands of servers and multiple petabytes of storage, an infrastructure model many companies have replicated. While these servers may be hosted in co-location or public cloud facilities, organizations still need to design and manage their own infrastructure, with an on-prem operations team identifying and addressing capacity issues. Public cloud resources are then reserved for specific workloads and for data storage that complements the on-prem IT systems.
To those who argue against this kind of investment: many organizations find that long-term cost efficiency far outweighs the challenges of fine-tuning server deployments for their own services and of owning infrastructure, most of which remain barriers regardless of the infrastructure chosen.
Better Performance, Data Privacy, Energy Efficiency, and Workload Optimization
Energy efficiency is another key reason for a data center overhaul. Each generation of CPU and GPU does more work per watt than the previous one, which means existing workloads can be completed with less electricity, or additional workloads can be performed at the same power draw. This improvement can be quantified precisely by benchmarking and power-testing the application. For example, an outdated dual-socket system may be replaced with a more efficient, more recent single-socket system. Energy use drops significantly, and licensing fees (often charged per socket) may drop as well, all while the data center maintains its SLAs to users.
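The dual-socket-to-single-socket consolidation described above can be sketched numerically. The wattages, consolidation ratio, and electricity rate below are hypothetical assumptions chosen for illustration, not measured benchmark data:

```python
# Illustrative estimate of the energy savings from replacing outdated
# dual-socket servers with more efficient single-socket systems.
# All figures below are hypothetical assumptions, not measured data.

def annual_energy_cost(watts_per_server, servers, price_per_kwh):
    """Annual electricity cost for a fleet running 24/7."""
    hours_per_year = 24 * 365
    kwh = watts_per_server * servers * hours_per_year / 1000
    return kwh * price_per_kwh

# Assumed old fleet: 100 dual-socket servers drawing 500 W each.
old_cost = annual_energy_cost(500, 100, 0.12)

# Assumed replacement: each new single-socket server does the work of
# two old ones at 300 W, so only 50 servers are needed.
new_cost = annual_energy_cost(300, 50, 0.12)

print(f"Old fleet: ${old_cost:,.0f}/yr")
print(f"New fleet: ${new_cost:,.0f}/yr")
print(f"Savings:   ${old_cost - new_cost:,.0f}/yr "
      f"({1 - new_cost / old_cost:.0%})")
```

Under these assumptions the electricity bill drops by 70%, before counting any reduction in per-socket licensing fees; a real deployment decision would substitute measured benchmark and power-test figures for the placeholders here.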