High-performance computing

Learn more about Eddie, our state-of-the-art research compute cluster.

The ECDF Linux Compute Cluster (Eddie)

Eddie Mark 3 is the third iteration of the University's compute cluster and is available to all University of Edinburgh researchers. It consists of over 7000 Intel® Xeon® cores with up to 3 TB of memory available on a single compute node. Research groups can take advantage of priority compute and guaranteed throughput for their projects by requesting an allocation in the priority compute tier.

More Information on the Eddie Compute Cluster

More Information on Priority Compute

Why Use Eddie?

Eddie can cut the time taken to solve computational problems, either by running software in parallel or by breaking a problem into a number of smaller sub-tasks, each of which can run on a separate CPU at the same time. Examples of the resulting speed-ups include processing the brain scans from a schizophrenia study around 400 times faster (28 hours instead of more than a year, roughly 469 days), and completing a protein structure prediction study of 810,000 simulations, representing 1.5 CPU-years of computation, in less than 2 days.
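
For example, an "embarrassingly parallel" workload can be divided into independent sub-tasks, each handling its own slice of the input. The sketch below, in C, assumes a Grid Engine style array job that sets an SGE_TASK_ID environment variable for each task; this is an illustrative setup, and the task numbering and scheduler details should be adapted to the system actually in use:

    /* subtask.c - process one slice of a larger problem, selected by a
     * scheduler-provided task ID (hypothetical: a Grid Engine array job
     * exporting SGE_TASK_ID, with task IDs starting at 1). */
    #include <stdio.h>
    #include <stdlib.h>

    #define TOTAL_ITEMS 1000000   /* size of the whole problem (illustrative) */
    #define NUM_TASKS   100       /* how many sub-tasks the job was split into */

    int main(void)
    {
        const char *id = getenv("SGE_TASK_ID");
        int task = id ? atoi(id) : 1;   /* fall back to task 1 when run interactively */

        /* Work out which slice of the problem this sub-task owns. */
        int chunk = TOTAL_ITEMS / NUM_TASKS;
        int start = (task - 1) * chunk;
        int end   = (task == NUM_TASKS) ? TOTAL_ITEMS : start + chunk;

        long sum = 0;
        for (int i = start; i < end; i++) {
            sum += i;                   /* stand-in for the real per-item work */
        }

        printf("Task %d processed items %d..%d (sum %ld)\n", task, start, end - 1, sum);
        return 0;
    }

Each sub-task then runs as an ordinary serial program, and the scheduler can run many of them at the same time across the cluster's cores.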

Researchers can also get help in understanding their storage and compute requirements and how these can most effectively be met, as well as support in developing proposals that include these requirements. If you’d like to explore how Eddie can transform your research, please request a consultation with our team by contacting the IS Helpline.

GPGPU Acceleration

GPGPU stands for General-Purpose computation on Graphics Processing Units. We currently have 32 GPU-equipped compute nodes, with a total of 44 NVIDIA Tesla K80 and 80 NVIDIA Titan X devices. These support NVIDIA's CUDA toolkit for GPGPU programming.
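
As a brief illustration of working with these devices, the sketch below (plain C, using the CUDA runtime API, and assuming the CUDA toolkit is available on the GPU node, for example via an environment module) lists the CUDA devices visible to a job:

    /* gpu_info.c - list the CUDA devices visible on the current node.
     * Build (with the CUDA toolkit available): nvcc gpu_info.c -o gpu_info */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA-capable device found on this node\n");
            return 1;
        }
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s, %.1f GB global memory, %d multiprocessors\n",
                   i, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                   prop.multiProcessorCount);
        }
        return 0;
    }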

Symmetric Multiprocessing (SMP) and Large Memory Systems

Large-memory jobs and shared-memory programs using methods such as OpenMP (Open Multi-Processing) can make use of a range of per-node memory configurations. We currently have compute nodes with between 64 GB and 3 TB of RAM.
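
As a minimal sketch of the shared-memory approach (illustrative only, with an arbitrary array size), the C program below uses an OpenMP parallel loop in which all threads work on one large array held in the node's memory:

    /* smp_sum.c - fill and sum a large in-memory array across threads with OpenMP.
     * Build: gcc -fopenmp smp_sum.c -o smp_sum
     * The array size here is illustrative; large-memory nodes allow much
     * bigger in-memory working sets. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void)
    {
        size_t n = 100000000;                 /* ~0.8 GB of doubles */
        double *data = malloc(n * sizeof *data);
        if (!data) { perror("malloc"); return 1; }

        double total = 0.0;

        /* Each thread works on a slice of the shared array in parallel. */
        #pragma omp parallel for reduction(+:total)
        for (size_t i = 0; i < n; i++) {
            data[i] = (double)i;
            total += data[i];
        }

        printf("Max OpenMP threads: %d, total = %.0f\n", omp_get_max_threads(), total);
        free(data);
        return 0;
    }

The number of threads would normally be matched to the cores, and the array size to the memory, requested for the job.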

More Information about our Memory Provisioning