Building AI and Machine Learning Workstations

High-performance computing is required, but there’s little standardization yet, and learning Linux is table stakes. By James E. Gaskin

ARTIFICIAL INTELLIGENCE (AI) and machine learning (ML) are the next big thing, but capitalizing on their potential takes specialized hardware with serious processing power. Can any builder who makes servers throw a couple of extra NVIDIA GPUs in the box and call it an AI workstation? Not exactly.

Oh, and system builders who want an AI hardware business will need to learn to program too.

While workstations for ML aren’t all that different from, say, gaming systems, the challenging part is the customer conversation, according to Jon Bach, founder and president of Puget Systems, a custom desktop and workstation builder in Auburn, Wash. “They write mostly in Python for [machine learning], either open source or from scratch, so you talk about the code,” he says. “Is it accelerated? Does it scale across multiple GPUs? Use shared memory across those cards?” Each of those answers will help decide the best hardware to use.
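Bach’s three questions translate naturally into a decision rule. As a minimal sketch of that logic, the hypothetical helper below (the names `WorkloadProfile` and `recommend_gpus` are illustrative, not any real product configurator) maps the customer’s answers to a rough GPU spec:

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    gpu_accelerated: bool  # does the customer's Python code use GPUs at all?
    multi_gpu: bool        # does it scale across multiple GPUs?
    shared_memory: bool    # does it pool memory across cards?

def recommend_gpus(profile: WorkloadProfile) -> dict:
    """Map the customer-conversation answers to a rough GPU spec."""
    if not profile.gpu_accelerated:
        # CPU-bound workload: spend the budget on cores and RAM instead
        return {"gpus": 0, "interconnect": None}
    if profile.multi_gpu:
        # Pooling memory across cards implies a high-bandwidth
        # GPU-to-GPU bridge such as NVLink; otherwise plain PCIe suffices
        bridge = "NVLink" if profile.shared_memory else "PCIe"
        return {"gpus": 4, "interconnect": bridge}
    return {"gpus": 1, "interconnect": "PCIe"}

print(recommend_gpus(WorkloadProfile(True, True, True)))
# {'gpus': 4, 'interconnect': 'NVLink'}
```

The GPU counts here are placeholders; the point is that each “yes” answer pushes the build toward more cards and faster interconnects, which is why the conversation has to happen before the parts list is drawn up.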

Rob Enderle, principal analyst for the Enderle Group, suggests working with the AI developer to define the platform, data feeds, and interconnects. “Someone doing workstations is in a good place to start,” he says, since AI systems are built on a workstation, rather than PC, foundation.

System builders must also be aware that the state of AI and ML hardware is somewhat chaotic, with little standardization. Bach says hardware needs won’t settle for at least five years, and Enderle believes it will take a decade for production AI systems to be a catalog number with no customization. That said, it may be just a matter of time before HP, Dell, and others dilute profits in the custom market by offering “standard” AI boxes sold like just another server.

Currently NVIDIA dominates the market with a range of GPUs. The company just spent $6.9 billion to acquire Mellanox Technologies, a firm specializing in high-performance computing in data centers, which will boost the development of the massively parallel systems needed for ML. NVIDIA also announced in March a program providing Quadro RTX GPU boards and CUDA-X AI software to Dell, HP, and Lenovo.

And Intel? CPUs in AI and ML systems may become little more than traffic cops scheduling the dedicated GPUs doing all the heavy lifting. Example: Puget Systems built mostly dual-socket AI workstations two years ago, but today uses single-socket motherboards. It shouldn’t surprise you that Intel is working on its own GPU to challenge NVIDIA, but nothing is expected on the market until 2020.

As an alternative to workstations, cloud services will grow in importance, says Enderle. “Companies go with clouds to avoid buying hardware that will be obsolete in a year.”

About the Author

JAMES E. GASKIN is a ChannelPro contributing editor and former reseller based in Dallas.
