Mark Parsons

Mark Parsons is Professor of High Performance Computing in the School of Physics and Astronomy at The University of Edinburgh.

Professor Parsons' inaugural lecture, "A billion billion calculations in a second – a step too far?", will take place at 5.15pm on Thursday 14 January 2016 in Lecture Theatre A, James Clerk Maxwell Building, King's Buildings.

Refreshments will be served in the foyer afterwards. RSVP to Carol Borthwick, email: , tel: 0131 650 5249.


Mark Parsons is Executive Director of EPCC, the supercomputing centre at the University, and Associate Dean for e-Research within the College of Science and Engineering. He joined EPCC in 1994 after a PhD in particle physics studying the differences between quark and gluon jets at the LEP collider at CERN. Before that he gained an MSc in IT – Parallel Theme at Edinburgh and a BSc (Hons) in Physics and Digital Microelectronics from the University of Dundee.

His first role at EPCC was as a software developer. He progressed to Commercial Manager in 1997 and then to Commercial Director, before his promotion to Executive Director in 2011. He was appointed to his Personal Chair in High Performance Computing in June 2013.

His many interests in High Performance Computing form two broad themes: stimulating the use of HPC by UK and European companies, and exploring novel next-generation HPC. In the first category he currently leads the Fortissimo and Fortissimo 2 European projects, with a combined value of €34 million and over 160 partners, many of them manufacturing SMEs. In the second category he leads the €8 million NEXTGenIO project, which is developing the software stack for the next generation of memory technologies with Intel and Fujitsu. He has a growing interest in the use of computing and data analytics for medical research.

Abstract of lecture

Supercomputing, and the modelling and simulation it enables, is a fundamental tool in the solution of many of the world's most pressing research challenges. Over the past 30 years, supercomputers have grown from a handful of processor cores to today's multi-Petaflop systems with hundreds of thousands of such cores. The next frontier is the Exascale – performing a billion billion calculations per second. To do this we must harness the combined calculating power of hundreds of millions of cores. To succeed we need to rethink how we translate the equations we use to describe the world around us into accurate, scalable simulations. This is the biggest supercomputing challenge we have ever faced. Will we solve it or is the Exascale a step too far?
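The scale jump in the abstract can be made concrete with a back-of-envelope calculation. The per-core rate below (10 GFLOP/s) is an illustrative assumption, not a figure from the lecture; it simply shows why an Exaflop machine implies on the order of a hundred million cores.

```python
# Back-of-envelope sketch of the Exascale arithmetic.
EXAFLOP = 1e18   # a billion billion (10^18) calculations per second
PETAFLOP = 1e15  # today's leading systems are multi-Petaflop

# Assumed per-core performance: ~10 GFLOP/s (hypothetical, for illustration).
per_core_flops = 10e9

cores_needed = EXAFLOP / per_core_flops

print(f"1 Exaflop = {EXAFLOP / PETAFLOP:.0f} Petaflops")
print(f"Cores needed at 10 GFLOP/s each: {cores_needed:.0e}")
```

At this assumed rate, roughly 10^8 cores must compute in concert – the "hundreds of millions of cores" the abstract refers to.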

Related links


School of Physics and Astronomy