GPU parallel computing for machine learning in Python: how to build a parallel computer by Yoshiyasu Takefuji
Requirements: ePUB, PDF, or AZW3 reader; 1.2 MB
Overview: This book illustrates how to build a GPU parallel computer. If you don’t want to spend time building one, you can buy a desktop or laptop machine with a built-in GPU; all you need to do is install GPU-enabled software for parallel computing. We are in the midst of a parallel computing era, and a GPU parallel computer is well suited to machine learning and deep (neural network) learning. For example, the GeForce GTX 1080 Ti is a GPU board with 3584 CUDA cores; using it, performance is roughly 20 times faster than that of an Intel i7 quad-core CPU. The book benchmarks the MNIST handwritten-digit recognition problem (60,000 training images of the digits 0 through 9). In that benchmark, a single GeForce GTX 1080 Ti board takes less than 48 seconds, while the Intel i7 quad-core CPU requires 15 minutes and 42 seconds.
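As a quick illustration of the kind of GPU-vs-CPU timing described above (a minimal sketch, not the book's code), the following Python snippet times MNIST training with TensorFlow/Keras, which runs on a CUDA-capable GPU automatically when one is available and falls back to the CPU otherwise. The model architecture, epoch count, and batch size here are arbitrary choices for illustration, not the book's benchmark configuration.

    # Minimal MNIST timing sketch (illustrative, assumes TensorFlow is installed).
    import time
    import tensorflow as tf

    # Load the 60,000-image MNIST training set (digits 0-9) and normalize.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small dense network; hyperparameters chosen arbitrarily for the sketch.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Time the training run; compare wall-clock time with and without a GPU.
    start = time.time()
    model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    print(f"Training took {time.time() - start:.1f} s")
    print("GPUs visible:", tf.config.list_physical_devices("GPU"))

Running the same script on a GPU machine and a CPU-only machine gives a rough sense of the speedup; exact numbers depend on the hardware, driver, and library versions.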
Genre: Computers & Technology
Download Instructions:
http://festyy.com/wX4cTn
http://festyy.com/wX4cTR