Are you being asked to provide GPUs to your application developers and data scientists for machine learning or high performance computing? Are users asking for more than one GPU to be usable by their applications? Are you interested in cost-effective ways to share GPUs across an entire data science team? If any of these questions applies to you, then this new E-Book from VMware on the key decisions to make about GPU use on vSphere will be a great read.
GPUs provide the computing power needed to run machine learning programs efficiently, reliably, and quickly. These GPU-based workloads are even more versatile, flexible, and efficient when they run in virtual machines on the VMware ESXi™ hypervisor. Can multiple GPUs be supported in a VM on vSphere? The answer is most decidedly yes. There are several ways of getting this done, and you will want to know the trade-offs between them. The E-Book positions the use of GPUs squarely in the context of machine learning, a field that is receiving a huge amount of attention in the market. It then covers the implementation approaches for GPUs at a briefing level and gives you pointers to other sources of technical detail on each approach. This will suit the decision-maker and IT architect in developing a strategy for addressing this emerging accelerator area.
Here is a recent webinar from VMware that also discusses these GPU and machine learning topics in technical detail. You can sign up to watch it at your convenience.
Take a look at a recent VMware Chief Technology Office perspective on machine learning for more background information on this field.
You can also find much more technical reading material on machine learning and GPUs at the VMware applications blog page.