Intel is known for pushing the edge of technology innovation, both creating and leading major advancements in the industry. We did it first with the move from mainframes to standards-based servers, and then again with the move to cloud computing and software-defined infrastructure.
Artificial intelligence (AI) is the next big wave of compute that will transform the way businesses operate and how people engage with the world. And while Intel is inherently well-positioned to support the machine learning revolution — after all, Intel processors power more than 97 percent of servers deployed to support machine learning workloads¹ — we know that to truly lead the industry, we must do more.
Our industry needs breakthrough compute capability — capability that is both scalable and open — to enable innovation across the broad developer community. Last week at the Intel Developer Forum (IDF), we provided a glimpse into how we plan to deliver the industry-leading platform for AI:
- Commitment to open source with optimized machine learning frameworks (Caffe, Theano) and libraries (Intel® Math Kernel Library for Deep Neural Networks, Intel® Deep Learning SDK).
- Disclosure of the next-generation Intel® Xeon Phi™ processor, codenamed Knights Mill, with enhanced variable precision and flexible, high-capacity memory.
- Today we completed the acquisition of Nervana Systems, bringing together the Intel engineers who create the Intel® Xeon® and Intel Xeon Phi processors with Nervana’s machine learning experts to advance the AI industry faster than would have otherwise been possible.
AI is nascent today, but we believe the clear value and opportunity AI brings to the world make it instrumental for tomorrow’s data centers. Intel’s leadership will be critical as a catalyst for innovation to broaden the reach of AI. While there’s been much talk about the value of GPUs for machine learning, the fact is that fewer than 3 percent of all servers deployed for machine learning last year used a GPU.
It’s completely understandable why this data, coupled with Intel’s history of successfully bringing new, advanced technologies to market and our recent sizable investments, would concern our competitors. However, arguing over publicly available performance benchmarks is a waste of time. It’s Intel’s practice to base performance claims on the latest publicly available information at the time the claim is published, and we stand by our data.
As data sets continue to scale, Intel’s strengths will shine. The scope, scale and velocity of our industry underscore the importance of broad, open access to AI innovations. And the industry clearly agrees. Consider these testimonials: From Baidu’s Jing Wang: “The increased memory size Intel Xeon Phi provides makes it easier for us to train our models efficiently.” From University of Washington’s Prof. Pedro Domingos: “Intel is in the leading position to bring us the hardware and the architectures to foster this open community that we really do need to make progress.”
Jason Waxman is corporate vice president in the Data Center Group and general manager of the Data Center Solutions Group at Intel Corporation.
¹Source: Intel estimate
Photo at top: Baidu’s Jing Wang joined Diane Bryant at the Intel Developer Forum to talk about how the Intel Xeon Phi processor makes it easier for Baidu to train their models efficiently.