Unlocking the power of software: the Message-Passing Interface
Edinburgh’s expertise in supercomputing helped create the standardised Message-Passing Interface, allowing the same software to run across different hardware systems and unlocking new potential in parallel computing.
Edinburgh Parallel Computing Centre (EPCC), the supercomputing centre at the University of Edinburgh, helped lead the way in creating the standardised Message-Passing Interface (MPI), enabling faster, more powerful problem-solving through parallel computing.
MPI is now the ubiquitous de facto standard among both hardware and software vendors. Parallel computing systems are everywhere, from the world’s most powerful supercomputers to multi-core laptops.
However, differences in how computer hardware vendors support parallel programs mean it can be difficult to write a program that will run well across different computers. MPI allows the same software to be run on different parallel computers. Without it, software would need to be rewritten before it could be used on different hardware. With it, users are free to choose hardware that meets their budgetary, operational and computational needs. EPCC continues to contribute to the development of MPI.
Exascale: the future of computing
MPI is an integral part of the effort to develop exascale computing: systems that can perform more than a billion billion calculations per second. EPCC and supercomputer manufacturer Cray are collaborating on new programming models to enhance the interfaces for exascale computing.
The ubiquity of MPI for high-performance computing (HPC) programming at all scales means that, at the very least, a migration path from MPI to any exascale solution must be found for all existing scientific software. The continued progress of innovative computational science, and the ongoing benefits to humanity that it brings, will depend on efficient exploitation of exascale machines, which in turn depends on making MPI ready for exascale.
EPCC researchers are also working with commercial and research organisations, including Intel, Cray and Cisco, and with the MPICH and Open MPI projects, to improve support for hybrid and multi-threaded programming, which complements MPI and offers another way to unlock the power of parallel computers. As part of the European EPiGRAM project, EPCC is using MPI to adapt software for modelling turbulence and space weather so that it can make use of exascale resources.
A 2011 survey of the largest HPC systems in Europe, undertaken by the PRACE project, found that all of their users employed MPI in some form.
Today, a Cray MPI library is distributed with every Cray system and is the dominant method of parallelisation used on Cray systems. Cray reported total revenue of over $420 million in 2012, so this is a large industry which is heavily reliant on MPI and the work that EPCC contributed.
Slashing job times
EPCC's MPI expertise has benefited many companies over the past two decades. For example, as part of a partnership with Scottish Enterprise called Supercomputing Scotland, EPCC has worked with Integrated Environmental Solutions (IES), the world’s leading provider of software and consultancy services focused on energy efficiency within the built environment.
With MPI, IES’s SunCast software can now run on a supercomputer, creating massive time savings for the company. In one case, analysis time was reduced from 30 days to 24 hours.
“Using MPI in IES Consultancy has increased the efficiency and therefore profitability of our own consultancy offering.”
MPI in a commercial environment
A lecture by Steven Turner entitled ‘Solar analysis - the journey from two weeks to one hour with HPC’ discusses EPCC’s work with Integrated Environmental Solutions.
Open source solution
MVAPICH, an open-source implementation of MPI that was designed for use in high-performance computing, has recorded more than 182,000 downloads and more than 2,070 users in 70 countries. Among those users are 765 companies.
The European Union-funded EPiGRAM project is preparing message-passing programming models for exascale systems.
Download this case study
Download a printable version of this case study as a PDF.