Avoid it if you can. Embrace it wholeheartedly if you must...
The Message Passing Interface (MPI) is a standard API (application programming interface) and set of tools for building and running distributed parallel programs on almost any parallel computing architecture.
MPI includes libraries of subprograms that make it as simple as possible to start up a group of cooperating processes and pass messages among them. It also includes tools for running, monitoring, and debugging MPI jobs.
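To make this concrete, here is a minimal sketch of such a program in C (the program name is a placeholder, not anything mandated by the standard): every process learns its rank, and each process other than rank 0 sends a short greeting to rank 0, which prints them.

    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int     rank, size;
        char    message[64];

        MPI_Init(&argc, &argv);                 /* Start the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* This process's ID (rank) */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* Total number of processes */

        if (rank == 0)
        {
            /* Rank 0 collects one message from every other process */
            for (int p = 1; p < size; ++p)
            {
                MPI_Recv(message, sizeof(message), MPI_CHAR, p, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("%s\n", message);
            }
        }
        else
        {
            snprintf(message, sizeof(message),
                     "Greetings from process %d of %d", rank, size);
            MPI_Send(message, strlen(message) + 1, MPI_CHAR, 0, 0,
                     MPI_COMM_WORLD);
        }

        MPI_Finalize();                         /* Shut down MPI cleanly */
        return 0;
    }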
There are many implementations of MPI, but Open MPI has emerged as a de facto standard implementation, having evolved from the best features of several earlier open source implementations. Open MPI is free and open source, so you can rest assured that projects you develop with it are unlikely to be orphaned.
It is important to note that MPI programs do not require a cluster to run. MPI is also effective at taking advantage of multiple cores in a single computer. MPI programs can even be run on a single-core computer for testing purposes, although this won't, of course, run any faster than a serial program on the same machine, and may even take slightly longer. Other parallel programming paradigms might be easier to use or faster than MPI on a shared-memory architecture, but if you might eventually want to run across multiple computers, programming in MPI is a good investment of time.
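As a rough illustration using Open MPI's mpirun launcher (other implementations provide equivalent commands), the same executable can be started with any number of processes on a lone machine; the binary name hello-mpi is just a placeholder:

    # Run 4 processes on the local machine, e.g. one per core
    mpirun -np 4 ./hello-mpi

    # Open MPI normally refuses to start more processes than there are
    # cores; --oversubscribe permits it, which is handy for testing on
    # a small PC
    mpirun -np 16 --oversubscribe ./hello-mpi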
The bottom line is that you can use the same MPI program to utilize multiple cores on a single computer, or a cluster of any size suitable for the job. This makes MPI the most portable API for parallel programs. You can also develop and test MPI code on your own PC and later run it on a larger cluster. Some users may find this approach preferable, since development on a local PC with their preferred tools can be faster and more comfortable, and it reduces the risk of impacting other cluster users with untested code.
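One plausible version of that workflow, assuming Open MPI's mpicc wrapper compiler and a hypothetical hosts file naming the cluster's nodes (many clusters would use a job scheduler instead of invoking mpirun directly), might look like this:

    # On your PC: compile and test with a couple of processes
    mpicc -O2 -o hello-mpi hello-mpi.c
    mpirun -np 2 ./hello-mpi

    # On the cluster: rebuild the same unchanged source, then scale up
    # across the nodes listed in the hosts file
    mpicc -O2 -o hello-mpi hello-mpi.c
    mpirun --hostfile hosts -np 64 ./hello-mpi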