
Why smart processor choices are key to AI success


AI has arrived

Nearly two-thirds of global companies have active machine learning efforts. From chatbots to video analysis to large-scale inference, AI permeates our personal and business lives. Yet this fast-growing diversity of workloads means that, when it comes to AI hardware, one size no longer fits all. The organizations that succeed with AI in the era already upon us will be those that build the most cost-efficient, capable, and scalable silicon infrastructure, one that provides a solid foundation for advancing AI.

The logical beginning

Success starts with understanding the importance of a portfolio approach to AI chip architecture: from AI-optimized CPUs, to general-purpose accelerators such as GPUs and FPGAs, to purpose-built ASICs, including Intel’s forthcoming neural network processors. As Wei Li, Vice President of Intel Architecture, Graphics and Software, and General Manager of Machine Learning and Translation at Intel, puts it: “AI problems demand a variety of silicon.”
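
As a purely hypothetical sketch of what that portfolio mindset can look like in practice, the snippet below maps a few coarse workload traits to a processor class. The Workload fields, thresholds, and choose_processor helper are illustrative assumptions for this article, not Intel guidance or a prescribed selection policy.

```python
# Hypothetical illustration of a "portfolio" of silicon: match a workload's
# coarse characteristics to a processor class. All fields, thresholds, and
# class names here are illustrative assumptions, not a vendor recommendation.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    latency_sensitive: bool   # e.g., real-time chatbot responses
    batch_size: int           # e.g., offline video analysis runs large batches
    fixed_function: bool      # mature, stable model suited to a purpose-built ASIC
    reconfigurable_io: bool   # custom data paths or protocols favoring an FPGA


def choose_processor(w: Workload) -> str:
    """Return an illustrative processor class for a given workload."""
    if w.fixed_function:
        return "ASIC (purpose-built neural network processor)"
    if w.reconfigurable_io:
        return "FPGA"
    if w.batch_size >= 64 and not w.latency_sensitive:
        return "GPU (general-purpose accelerator)"
    return "AI-optimized CPU"


if __name__ == "__main__":
    workloads = [
        Workload("chatbot inference", True, 1, False, False),
        Workload("overnight video analysis", False, 256, False, False),
        Workload("network packet inspection", True, 1, False, True),
        Workload("high-volume production inference", False, 512, True, False),
    ]
    for w in workloads:
        print(f"{w.name:32s} -> {choose_processor(w)}")
```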

The growth of data

Organizations are drowning in data: an estimated 90% of it was generated in just the past two years. Analysts forecast that worldwide data will grow tenfold by 2025, reaching 163 zettabytes. Yet only an estimated 2% has been analyzed, leaving a vast untapped opportunity to propel business and fuel societal insights. In fact, much of the interest and activity in AI is driven by the desire to unlock business value from these growing torrents of data. According to Gartner, through 2023 the computational resources used in AI will increase fivefold from 2018, making AI the top category of workloads driving infrastructure decisions. “All apps,” says Lisa Spelman, VP of the Intel Data Center Group and General Manager of Intel Xeon Systems, “will have AI built in.”
