Research Facilities
We have three supercomputer resources in our group, each tailored to different tasks. We also have access to the White Rose Grid, including machines at York and Sheffield.
Edred

Edred is a commercial Beowulf cluster purchased by the condensed matter theory group in November 2008. Edred is the successor to the old cluster "Erik", which was named after a Viking who lived in York. The system uses a high-bandwidth, low-latency network, allowing a computation to be divided across a large number of CPUs. The system runs a commercial Linux distribution and consists of the following hardware:
- Head Node
  - Dual AMD Opteron Quad Core CPUs
  - 8 GB DDR2 RAM
  - 500 GB RAID 5 data storage
- 32 Compute Nodes
  - Dual AMD Opteron Quad Core CPUs
  - 16 GB DDR2 RAM
  - 500 GB scratch space
- InfiniPath high-speed interconnect
- Ethernet administration and filesystem network
- Peak compute performance of approximately 1 TFLOPS
- Power consumption of 10 kW
Wohlfarth
Wohlfarth is a traditional Beowulf cluster made entirely from commodity components, i.e. desktop PCs, linked together with an Ethernet network to form a collective machine. The original system was purchased in November 2008 (then called Grendel), with upgrades in December 2009 and September 2010. The relatively low performance of the interconnecting network means that Wohlfarth is best suited to batch work, where many independent serial jobs run concurrently. The prime advantage of this system is cost: it was roughly a quarter of the price of the commercial system. The system runs Ubuntu Server Linux and presently consists of the following hardware:
- Head Node
  - Core i7 920 Quad Core CPU
  - 6 GB DDR3 RAM
  - 1 TB software RAID 1 home storage
  - 6 TB data storage
- 24 Compute Nodes
  - Various CPUs: AMD Phenom II X4 945/925, Intel Core i5, AMD Athlon II X4
  - Various RAM sizes: mainly 8 GB DDR2 ECC, some with 4 GB DDR2/DDR3
  - 500 GB scratch space
- One high-memory node with dual Quad Core Opteron CPUs and 32 GB RAM for FFT calculations
- 4 x Nvidia GTX 260 graphics cards for GPGPU/CUDA acceleration
- Ethernet network
- Grid Engine queueing system and resource manager
- Peak compute performance of approximately 500 GFLOPS (930 GFLOPS including graphics cards [dp])
- Power consumption of 5 kW
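As a rough illustration of the batch workflow described above, independent serial jobs are typically submitted to Grid Engine as an array job; a minimal submit script might look like the sketch below (the job name, task count, and payload are illustrative, not taken from the group's actual configuration). The `#$` lines are directives read by `qsub` and are plain comments when the script runs on its own, so the fallback default lets the script also be run directly for testing.

```shell
#!/bin/bash
# Minimal Grid Engine array-job sketch (illustrative, not the group's real setup).
#$ -cwd             # run each task from the submission directory
#$ -N batch_demo    # job name (assumed)
#$ -t 1-4           # array job: 4 independent serial tasks

# Grid Engine sets SGE_TASK_ID for each array task; default to 1 so the
# script also works when executed outside the queueing system.
TASK="${SGE_TASK_ID:-1}"
echo "running serial task ${TASK}"
```

Submitted with `qsub script.sh`, Grid Engine then schedules each task onto a free compute node, which is exactly the embarrassingly parallel workload the slower Ethernet interconnect handles well.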