Intel has introduced a new collection of data center processors designed to accommodate the varied needs of hybrid cloud, artificial intelligence, 5G, and edge computing workloads.
“With our newest Xeon Scalable processors, our ecosystem can create solutions that matter, and our customers can place their workloads securely where they perform the best, from the edge to the network to the cloud,” said Intel CEO Pat Gelsinger this morning in a launch webcast.
Based on Intel’s 10nm process, the 3rd Gen Intel Xeon Scalable family (formerly code-named “Ice Lake”) features 57 SKUs offering a mix of one-, two-, four-, and eight-socket configurations with as many as 40 cores. The platform supports up to 6 TB of memory per socket, up to 8 channels of DDR4-3200 memory per socket, and up to 64 lanes of PCIe Gen4 interface capacity per socket.
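Those per-socket I/O figures imply theoretical peak bandwidths that are straightforward to work out. A quick back-of-the-envelope sketch (the formulas are standard; only the channel, transfer-rate, and lane counts come from the specs above):

```python
# Back-of-the-envelope peak bandwidth from the per-socket specs above.
# DDR4-3200: 3200 MT/s, 8 bytes transferred per 64-bit channel per transfer.
channels = 8
transfers_per_sec = 3200e6
bytes_per_transfer = 8
mem_bw_gb = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"Peak DRAM bandwidth per socket: {mem_bw_gb:.1f} GB/s")  # 204.8

# PCIe Gen4: 16 GT/s per lane with 128b/130b encoding, per direction.
lanes = 64
lane_gb = 16e9 * (128 / 130) / 8 / 1e9  # ~1.97 GB/s usable per lane
pcie_bw_gb = lanes * lane_gb
print(f"Peak PCIe Gen4 bandwidth per socket: {pcie_bw_gb:.1f} GB/s")
```

Real-world throughput lands below these ceilings, but the ratios explain why Gen4 matters for Optane SSDs and 100G-class NICs on this platform.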
The new chips also support 200 series Intel Optane persistent memory SKUs, Intel’s P5800X Optane SSD, and Intel D5-P5316 NAND SSDs, as well as Intel Ethernet 800 Series adapters and Intel Agilex FPGAs.
On average, Intel says, the Ice Lake platform delivers a 46% performance improvement over previous-generation Xeon Scalable products on popular data center workloads, and up to a 2.65x average performance gain versus comparable five-year-old systems.
Cloud users in particular, according to Intel, can expect to experience up to 50% better performance on database, e-commerce, web server, and other latency-sensitive workloads, as well as 65% better analytics speeds.
The new processors are designed to support bidirectional workload migration between on-premises and online environments as well. “For example, if a customer is using VMware on premises, migrating their virtual machines between the public cloud and their on-prem environment is seamless,” said Lisa Spelman, corporate vice president and general manager of Intel’s Xeon and Memory Group, during this morning’s broadcast.
The platform’s broad software compatibility, she added, enables consistent application performance across cloud environments.
Drawing on Intel Deep Learning Boost technology, Intel says, 3rd Gen Xeon Scalable processors deliver a 74% average increase in AI inference performance over the prior generation, including up to 1.56x better image classification performance. In testing, Intel claims, the new chips ran up to 1.5x faster than AMD’s EPYC 7763 CPU across 20 popular AI workloads and up to 1.3x faster than NVIDIA’s A100 GPU.
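Deep Learning Boost centers on AVX-512 VNNI instructions, which fuse the 8-bit multiply-accumulate that quantized inference relies on into a single operation. A pure-Python sketch of the underlying int8 quantization idea (toy values chosen for illustration; real workloads calibrate scales per tensor and run int8 kernels inside frameworks such as TensorFlow or PyTorch):

```python
# Illustrative sketch of int8 quantized inference, the arithmetic
# pattern that AVX-512 VNNI (Deep Learning Boost) accelerates.
# The weights, activations, and scales below are hypothetical.

def quantize(xs, scale):
    """Map float values to int8 with a simple symmetric scale."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

weights_f = [0.5, -1.2, 0.3, 0.9]
acts_f = [1.0, 0.25, -0.5, 2.0]

w_scale, a_scale = 0.01, 0.02
w_q = quantize(weights_f, w_scale)
a_q = quantize(acts_f, a_scale)

# int8 x int8 multiply-accumulate into a wide accumulator --
# the step VNNI performs across many lanes per instruction.
acc = sum(w * a for w, a in zip(w_q, a_q))

# Dequantize back to float.
result = acc * w_scale * a_scale
print(result)  # close to the float dot product
print(sum(w * a for w, a in zip(weights_f, acts_f)))
```

The appeal is that int8 math moves four times the values per cycle of fp32 at a small, usually recoverable accuracy cost, which is where the claimed inference gains come from.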
The new platform’s four network-optimized “N-SKUs,” Intel says, deliver an average of up to 62% better performance than their predecessors on common network and 5G tasks. When combined with Intel Ethernet 800 series adapters and eASIC chips, the manufacturer adds, users can double their 5G MIMO performance within a similar power envelope.
Security features in the new portfolio include Intel’s Total Memory Encryption, Platform Firmware Resilience, and Software Guard Extensions (SGX) technologies. The last of those, which partitions application code and data into isolated memory “enclaves,” was previously available on Intel Xeon E series processors and now also ships on dual-socket Xeon Scalable processors, where enclaves can protect up to 1 TB of content.
Intel Crypto Acceleration technology in the new platform increases the performance of encryption-intensive workloads like SSL web serving, Intel says, allowing organizations to secure sensitive data without slowing down applications.
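The claim here is about bulk cipher throughput, which is easy to check empirically on any given CPU. A minimal sketch of such a measurement, with the caveat that Python’s standard library has no AES primitive, so SHA-256 stands in here purely to show the shape of the benchmark; real SSL comparisons (e.g. `openssl speed -evp aes-256-gcm`) measure the ciphers that hardware crypto extensions actually accelerate:

```python
# Rough sketch of measuring bulk crypto throughput on a given CPU.
# SHA-256 is a stand-in: Python's stdlib lacks AES, but the loop
# structure is the same for any cipher or digest benchmark.
import hashlib
import time

payload = b"\x00" * (1 << 20)  # 1 MiB buffer
rounds = 64

start = time.perf_counter()
for _ in range(rounds):
    hashlib.sha256(payload).digest()
elapsed = time.perf_counter() - start

mb_per_sec = rounds * len(payload) / (1 << 20) / elapsed
print(f"SHA-256 throughput: {mb_per_sec:.0f} MiB/s")
```

Run before and after enabling an accelerated cipher path, a loop like this is how the kind of uplift Intel is claiming for SSL web serving would show up in practice.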
Collectively, SKUs in the 3rd Gen Xeon Scalable family are designed for an emerging data center landscape in which workloads will reside in a mix of settings with equally diverse requirements, according to Navin Shenoy, executive vice president and general manager of Intel’s Data Platforms Group.
“The data centers of the future will look very different from how they do today,” said Shenoy this morning. “They’ll be more distributed in size and location, built on both public and private clouds. Compute, storage, and memory will be increasingly disaggregated, leveraging pools of connected infrastructure. Security will be architected in at the chip level. Flexibility will extend across hardware, software, applications, and services deployed as smaller units called microservices for faster development time and time to value. CPUs and XPUs will work closely together to solve increasingly complex problems across these distributed environments.”
Intel says it shipped more than 200,000 3rd Gen Xeon Scalable units in the first quarter of the year. Microsoft, Google, and other public cloud leaders will soon introduce virtual machines powered by the new chips, as will more than 20 high-performance computing labs and HPC-as-a-service providers.
Today’s launch comes shortly after the unveiling of “IDM 2.0,” an integrated device manufacturing model that will see Intel spend some $20 billion on two new fabs in Arizona and offer outsourced foundry capacity to global customers while making expanded use of third-party foundry capacity itself.
It also comes weeks after AMD released a third generation of its 7nm EPYC server processors offering accelerated performance and enhanced security.