Monday, 7 May 2018

Intel Positions Xeon as Machine Learning Competitor in Inference Workloads

https://ift.tt/eA8V8J

The performance gap between GPUs and CPUs for deep learning training and inference has narrowed, and for some workloads CPUs now have an advantage over GPUs. For machine translation, which uses RNNs, the Intel Xeon Scalable processor outperforms the NVidia* V100* GPU by 4x on the AWS ...
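
The snippet above is a performance claim rather than a methodology, so as a rough illustration only: the PyTorch sketch below (not the benchmark Intel or the article used) times forward passes of a small LSTM on CPU and, when a GPU is available, on CUDA. The model dimensions, batch size, and sequence length are illustrative assumptions, not the article's configuration.

import time
import torch
import torch.nn as nn

def time_inference(model, inputs, device, iters=50):
    # Move the model and inputs to the target device and run in eval mode.
    model = model.to(device).eval()
    inputs = inputs.to(device)
    with torch.no_grad():
        for _ in range(5):            # warm-up so one-time setup cost is excluded
            model(inputs)
        if device.type == "cuda":
            torch.cuda.synchronize()  # GPU kernels are asynchronous; wait before timing
        start = time.perf_counter()
        for _ in range(iters):
            model(inputs)
        if device.type == "cuda":
            torch.cuda.synchronize()
        return (time.perf_counter() - start) / iters

# A small 2-layer LSTM stands in for a translation model (assumed sizes, not the article's).
model = nn.LSTM(input_size=512, hidden_size=512, num_layers=2, batch_first=True)
batch = torch.randn(1, 30, 512)       # 1 sentence, 30 tokens, 512-dim inputs

print("CPU: %.2f ms/iter" % (time_inference(model, batch, torch.device("cpu")) * 1000))
if torch.cuda.is_available():
    print("GPU: %.2f ms/iter" % (time_inference(model, batch, torch.device("cuda")) * 1000))

Batch size 1 mirrors the low-latency inference scenario the claim is about, where a GPU is often underutilized; larger batches typically swing the comparison back toward the GPU.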

from Google Alert - CPU processor https://ift.tt/2FS9ztQ
via IFTTT
