Today's video processors, also known as graphics processing units (GPUs), provide the enormous amount of computing power needed to quickly render the rapidly changing 3-dimensional graphics of animation software onto a 2-dimensional screen.
Most of this development was initially driven by the video game industry. Ironically, a multi-billion dollar industry was pushed forward largely by people whose primary mode of transportation is a skateboard. Never underestimate the power of a grassroots movement.
Scientists eventually recognized that the processing power of GPUs could be harnessed for scientific computations. In some cases, a GPU can offer much faster processing than a traditional CPU. However, utilizing GPUs requires rewriting software using specialized tools and libraries. Hence, GPUs do not benefit the majority of existing scientific software. We cannot just add a GPU to our PC and magically watch all of our programs run 1000 times faster.
We can make a few general statements about GPU computing:
Like every new technology, GPU computing quickly became over-hyped. It became a solution looking for problems. The hype has subsided as people have become aware of how much effort is involved in utilizing GPUs and where the added programmer hours are actually a worthwhile investment.
GPU boards are now being developed and marketed explicitly for scientific computation rather than graphics rendering. These processors are more aptly called accelerators, but the term GPU is still widely used.
CUDA is a proprietary system for utilizing nVidia video processors in scientific computing. Using a system such as CUDA, programmers can boost the performance of a program by running computations in parallel on both the CPU and the GPU.
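To give a sense of what this looks like in practice, below is a minimal sketch of a CUDA program that adds two vectors in parallel on the GPU. The kernel name, array size, and launch parameters here are arbitrary choices for illustration, not taken from any particular application, and a real program would also check the return status of each CUDA call.

// vec-add.cu: Minimal sketch of offloading a computation to a GPU with CUDA.
// Compiled with nVidia's nvcc compiler:  nvcc -o vec-add vec-add.cu

#include <stdio.h>
#include <stdlib.h>

#define N   1000000     // Number of elements in each vector (arbitrary)

// The __global__ qualifier marks a kernel: a function that runs on the GPU.
// Each GPU thread computes one element of the sum.
__global__ void vec_add(const double *a, const double *b, double *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if ( i < n )
        c[i] = a[i] + b[i];
}

int main(void)
{
    size_t  bytes = N * sizeof(double);
    double  *a = (double *)malloc(bytes),
            *b = (double *)malloc(bytes),
            *c = (double *)malloc(bytes);
    double  *dev_a, *dev_b, *dev_c;

    // Initialize the input vectors on the host (CPU)
    for (int i = 0; i < N; ++i)
    {
        a[i] = i;
        b[i] = 2 * i;
    }

    // Allocate memory on the GPU and copy the inputs to it
    cudaMalloc(&dev_a, bytes);
    cudaMalloc(&dev_b, bytes);
    cudaMalloc(&dev_c, bytes);
    cudaMemcpy(dev_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads each to cover all N elements
    int threads = 256;
    int blocks = (N + threads - 1) / threads;
    vec_add<<<blocks, threads>>>(dev_a, dev_b, dev_c, N);

    // Copy the result back to host memory
    cudaMemcpy(c, dev_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[%d] = %f\n", N - 1, c[N - 1]);

    cudaFree(dev_a);
    cudaFree(dev_b);
    cudaFree(dev_c);
    free(a);
    free(b);
    free(c);
    return 0;
}

Note that the data must be explicitly copied between the computer's main memory and the GPU's memory, and the computation must be restructured around many small threads. This is part of the programming effort, and the run-time overhead, that determines whether a GPU is worthwhile for a given application.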
One caveat is that while GPUs have a great deal of processing power, they are designed specifically for graphics rendering, and are therefore not well suited to every computational application. The GPU's architecture and programming interface lend themselves very well to some applications and not so well to others. The subject is complex, and readers are advised to research whether GPUs would fit their needs before investing in new hardware.
Nevertheless, some sites have invested in clusters with large numbers of GPUs in order to gain the best performance for certain applications that lend themselves well to GPU computing. A fair number of GPU-based programs have already been developed, and having access to a pool of GPUs allows such software to be utilized easily. GPU clusters are especially popular for machine-learning applications, which require enormous amounts of computation.
OpenCL is an open standard alternative to CUDA that supports GPUs from multiple vendors rather than just nVidia, and can also utilize CPUs. With OpenCL, the same parallel code can utilize both CPU and GPU resources, so programming effort need not be duplicated to take advantage of different hardware. Recent versions of OpenMP also support offloading work to GPUs.
What will adding a GPU/accelerator do for most of your current software? Why?
Is a GPU a good investment for scientific computing?