Computing Services

The department behind IT services at the University of Bath

Posts By: Steven Chapman

Bath's 6th Annual HPC Symposium

Categories: High Performance Computing (HPC), Research

HPC Symposium 2017 - Delegation

We would like to thank everyone who joined us at the 6th Annual HPC Symposium on Monday!

"We have seen the symposium go from strength to strength and become established as a key event for bringing together the University's community of HPC developers and users. This year's symposium was excellent, with talks and posters that highlighted the variety and breadth of HPC usage across the campus, and outstanding keynote talks from two of the UK's leading HPC researchers. My thanks to the organisers who, once again, did a great job."
 - Prof David Bird, Chair of the HPC Management Group

This year, over 60 participants attended from within the University, along with guests from the University of Bristol, the University of Glasgow and ClusterVision. There was a vibrant array of multi-disciplinary contributions from our two keynote speakers, the short talks and the quick-fire flash presentations (see the Symposium Programme). The Symposium provided a great opportunity for networking and for learning about new developments in hardware, software and numerical techniques.

Our first keynote speaker, Prof Simon McIntosh-Smith (University of Bristol), Head of the HPC Research Group, discussed how to deal with the increasingly diverse range of competing architectures that software developers are faced with, many of which are available on Bath's own Balena HPC facility. Prof David Britton (University of Glasgow), who is a member of the ATLAS experiment at the Large Hadron Collider at CERN and leads the GridPP project, gave our second keynote and talked about the evolution of the Worldwide LHC Computing Grid and how this specialised research infrastructure is helping our understanding of the Universe.

HPC research at the University is innovative and varied

The contributions from our community showcased the sheer variety of innovative HPC work being done across the University, covering:

  • modelling of rare events and aerospace composites;
  • designing buildings for thermal comfort;
  • molecular dynamics;
  • using GPUs to model the sea floor and the low frequency radio sky;
  • simulating quantum lattice states;
  • analysing global air quality; and
  • optimising partial differential equation solvers.

Thank you and congratulations

We would like to say a big thank you to the contributors, attendees, keynote speakers, session chairs and ground crew, who together made this meeting a day to remember.

Lastly, we would like to congratulate once again our two prize winners, Jack Betteridge (SAMBa CDT, Mathematical Sciences) and Will Saunders (Mathematical Sciences), for their outstanding contributed talk and flash/poster presentation respectively.

We look forward to seeing you again next year!

Best wishes,
Dr Steven Chapman (Computing Services)
Dr David Miranda (ACE)
Dr Jonathan Skelton (Chemistry)

For more images, see our @BathHPC Twitter feed.

Bath 6th Annual HPC Symposium - 12th June 2017

Categories: High Performance Computing (HPC), Research

We are looking forward to the 6th Annual Symposium on High Performance Computing (HPC) in just a few days' time, on Monday 12th June 2017, in the Chancellors' Building.

We have had 63 registrations from staff and students across the University. There is an exciting programme lined up, with two fantastic keynotes from Prof Simon McIntosh-Smith and Prof David Britton, 9 contributed talks and an active set of quick-fire flash poster/talks to look forward to. This will be an excellent opportunity to get an overview of the broad range of research being done by our growing HPC community here at Bath, to discuss your own work with others and to exchange ideas.

The schedule overview and the detailed programme (PDF) can be found here:
http://www.bath.ac.uk/bucs/services/hpc/symposium/2017/programme/
http://www.bath.ac.uk/bucs/services/hpc/symposium/2017/programme/full-programme.pdf

Keynote speakers:

We are pleased to announce two Keynote speakers for the 6th Annual Symposium on HPC.

The first Keynote will be from Simon McIntosh-Smith, from the University of Bristol, who will give a talk entitled: "'Xeon and Pascal and POWER, oh Phi!': how to cope in a world of increasingly diverse architectures".

Simon is a Professor of High Performance Computing and the Head of the Microelectronics Research Group. Among his many roles, he is a contributor to both the OpenCL and OpenMP parallel programming standards, a regular member of the programme committees for IEEE/ACM Supercomputing and ISC, and a member of the EPSRC ARCHER national supercomputer design team.

Simon's research interests include:

  • performance portability;
  • application-based fault tolerance (ABFT);
  • new algorithms for novel architectures;
  • heterogeneous, many-core processor architectures, including GPUs, Xeon Phi, FPGAs and DSPs; and
  • scaling applications to run on millions of cores (Exascale computing).

The second Keynote will be from David Britton, who is a member of the ATLAS collaboration, one of the experiments at the Large Hadron Collider at CERN (https://home.cern/about). He will give a talk entitled: "Evolution of the Worldwide LHC Computing Grid".

David is a Professor of Physics at the University of Glasgow and Project Leader of the GridPP project, which provides Grid computing for particle physics throughout the UK. Within the ATLAS collaboration, his interest is in the Higgs boson decaying to a pair of tau-leptons. Previously he worked on CMS, another of the LHC experiments, qualifying the crystals that make up the end-caps of the electromagnetic calorimeter. He has also worked at the Stanford Linear Accelerator Center (the BaBar experiment), at Cornell (the CLEO experiment) and at DESY in Hamburg (the ARGUS experiment), with an emphasis on tracking detectors. Earlier work at TRIUMF in Vancouver established the most stringent limits on lepton universality through rare pion decays.

He has been involved with the GridPP project since its conception in 2000 and was one of the lead authors of the proposals for all three phases. Initially appointed as Project Manager, he took over as GridPP Project Leader in 2008. GridPP is a collaboration of particle physicists and computing scientists from 19 UK universities, together with the Rutherford Appleton Laboratory and CERN, which has built a Grid for particle physics.

Balena system maintenance: 14th - 21st July 2017

Categories: High Performance Computing (HPC)

On the weekend of 15/16th July, essential maintenance will be carried out on the 5 West data centre. For this work to take place, Balena will need to be powered down for the weekend. Following this planned service interruption, we will keep Balena offline for the following week to carry out our own essential maintenance on the system; see the details below.

Duration: 4pm, 14th July to 21st July 2017

ClusterVision and the HPC Team will be working on the system during this period.

To note:

  • All jobs will need to be dequeued before the Balena shutdown - we will put a reservation in place to make sure no jobs are in a running state when the system is shut down (see the example after this list).
  • Balena, including your data, will not be accessible during this maintenance period.
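
If you would like to check for your own queued or running jobs before the shutdown, the standard SLURM command below can be used from a Balena login node (illustrative usage, not a required step):

  squeue -u $USER    # list your queued and running jobs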

Maintenance activities:

  • Test emergency shutdown of Balena.
  • Upgrade OS (minor kernel update and security patches) on all nodes.
  • Upgrade Bright Cluster Manager.
  • Upgrade OFED (InfiniBand driver + libraries) on all nodes and reinstall MPI libraries.
  • Upgrade MySQL database on master nodes.
  • BeeGFS - Parallel File-system:
    • Firmware update.
    • Enable user quotas on inodes and data usage (informational only; see the example after this list).
    • Re-tune the meta-data partition.
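
Once the informational quotas are enabled, you should be able to review your own usage with the standard BeeGFS client tool (a sketch of typical usage; the exact reporting on Balena may differ):

  beegfs-ctl --getquota --uid $(id -u)    # report your data and inode usage on the BeeGFS file-system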

Please contact us at hpc-support@bath.ac.uk if you have any issues.

Team HPC

EPSRC launches six new Tier2 HPC Centres

Categories: Computing Services, High Performance Computing (HPC), Research

The Engineering and Physical Sciences Research Council (EPSRC) has just launched six new High Performance Computing (HPC) centres, worth a combined £20 million, at the Thinktank science museum in Birmingham. These new Tier2 regional centres are aimed at supporting both academia and industry by providing access to a diverse range of powerful supercomputers for scientific research and engineering.

These Tier2 systems will sit between the national Tier1 systems (e.g. ARCHER) and local campus Tier3 systems (e.g. Balena), addressing the gap in capability between these two levels. This new layer of Tier2 HPC will enable new discoveries and drive innovation. It will be open to any UK EPSRC researcher, provide easy local access, and be integrated with the HPC ecosystem across the UK, both vertically into the Tier1 and Tier3 systems and horizontally with the other Tier2 centres. The Tier2 Centres will provide access to new and advanced technologies such as Intel's Knights Landing (KNL) Xeon Phi, NVIDIA's Pascal, POWER8, 64-bit ARM and data burst buffers. To help researchers navigate this diverse array of technologies, all six Centres will provide support through Research Software Engineers (RSEs) to assist with training and skills development, and with the porting and optimisation of applications and codes.

Great Western 4 (GW4) HPC Centre for Advanced Architectures: Isambard
Isambard, named after the Victorian engineer Isambard Kingdom Brunel, will be a first-of-its-kind 64-bit ARM-based supercomputer, providing multiple advanced architectures within the same system to enable evaluation and comparison across a diverse range of hardware platforms in a production environment. This is a joint project between the GW4 Alliance (the universities of Bath, Bristol, Cardiff and Exeter), Cray Inc. and the Met Office. The service will provide over 10,000 high-performance ARMv8 cores, as well as NVIDIA P100 and Intel Xeon Phi (KNL) accelerator cards.
http://gw4.ac.uk/isambard/

A National Facility for Peta-scale Data Intensive Computation and Analytics: Peta-5
Peta-5 is a multi-disciplinary facility providing large-scale data simulation and high-performance data analytics, designed to enable advances in material science, computational chemistry, computational engineering and health informatics. The project, led by the University of Cambridge, will provide peta-flop performance across Intel KNL, GPU and x86 architectures, together with peta-scale storage on spinning disk and tape.

Hub in Materials and Molecular Modelling: Thomas
Thomas, named after the polymath Thomas Young, is a Materials and Molecular Modelling (MMM) Hub with applications in energy, healthcare and the environment. The project is led by UCL, with partners in the Materials Chemistry Consortium (MCC) and UKCP, and will provide a large x86-64 based system.
https://www.ucl.ac.uk/news/news-articles/1116/021116-ucl-hpc-hub-materials-science

Joint Academic Data science Endeavour: JADE
JADE, led by the University of Oxford, will provide the largest GPU-based system in the UK. Working with NVIDIA, the centre will provide around 3.7 petaflops of performance using NVIDIA's latest DGX-1 platform, which makes use of the new NVLink technology. The system will be optimised for deep learning, which will greatly benefit research applications in machine learning, image/video/audio analysis and molecular dynamics.
http://www.ox.ac.uk/news/2017-03-30-£3m-awarded-oxford-led-consortium-national-computing-facility-support-machine

HPC Midlands Plus
HPC Midlands Plus will focus primarily on data-intensive applications in fields including engineering, manufacturing, healthcare and energy. Led by Loughborough University, the Centre will provide a large x86-based system, accompanied by a modest-sized component of POWER8 systems, each with 1TB of memory.
http://www.lboro.ac.uk/media-centre/press-releases/2017/february/32m-funding-for-midlands-based-high-performance-computing-centre.html

Edinburgh Parallel Computing Centre (EPCC): Cirrus
Cirrus will be hosted by the Edinburgh Parallel Computing Centre (EPCC) and will offer over 10,000 x86 cores to both science and industry, nicely complementing EPSRC's national HPC service, ARCHER. As part of this project, EPCC will host a mini-RDF (Research Data Facility) to provide a common object-based data store for the Tier2 ecosystem and other supercomputers.
https://www.epcc.ed.ac.uk/facilities/cirrus

Recent press releases about the launch:

  1. https://www.epsrc.ac.uk/newsevents/news/sixhpccentresofficiallylaunch
  2. https://www.top500.org/news/uk-antes-up-20-million-for-six-new-supercomputer-centers
  3. http://www.bath.ac.uk/news/2017/03/30/gw4-world-first-supercomputer-launched-at-national-exhibition

Save the date - 6th Annual Symposium on HPC on 12th June 2017

Categories: High Performance Computing (HPC), Research

We are pleased to announce that the 6th Annual Symposium on High Performance Computing will be held on Monday 12th June 2017.

This symposium, organised by the University of Bath, will bring together staff, researchers and students working with HPC in different areas of science and engineering, along with other invited specialists in the field.

Save the date in your calendar and don't miss the opportunity to network and catch up on the exciting developments in HPC.

Best regards,
The Organising Committee

Balena maintenance - 2nd February 2017

Categories: High Performance Computing (HPC)

During the inter-semester break, on Thursday 2 February between 09:00 and 13:00, we will be placing the cluster into maintenance mode whilst we perform failover tests between the pair of master nodes and the BeeGFS node couplets.

These tests will ensure that pairs of master nodes and BeeGFS node couplets are in good working order should an unexpected system issue occur that triggers a system failover.

While the system failovers are being tested, all users will still be able to access data from /home and /beegfs, but you may notice a momentary freeze while the storage areas are transferred between the failover pairs. All compute nodes will be placed into a scheduler reservation to prevent any workloads from running while these tests are carried out (see the commands below).
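
If you want to confirm when the maintenance reservation is in place, SLURM's standard query commands can be used from a login node (illustrative usage):

  scontrol show reservation    # list current and upcoming reservations
  sinfo                        # summary of partition and node states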

Sorry for the short notice of this announcement; I hope this will not cause too much disruption for anyone.

Team HPC

GW4 joins industry partners to develop ‘first of its kind’ supercomputer

Categories: High Performance Computing (HPC), Research

The GW4 Alliance, together with Cray Inc. and the Met Office, has been awarded £3m by EPSRC to deliver a new Tier 2 high performance computing (HPC) service for UK-based scientists. This unique new service, named 'Isambard' after the renowned Victorian engineer Isambard Kingdom Brunel, will provide multiple advanced architectures within the same system in order to enable evaluation and comparison across a diverse range of hardware platforms. It will also provide a service to the community that will enable algorithm development and the porting of scientific codes. Isambard will include 10,000+ ARMv8 64-bit cores, in addition to a smattering of x86 CPUs, Intel Knights Landing Xeon Phi processors and NVIDIA P100 GPUs.
For more information, please see the press releases below:

Bath's press release:
http://www.bath.ac.uk/research/news/2017/01/17/gw4-joins-industry-partners-to-develop-%E2%80%98first-of-its-kind%E2%80%99-supercomputer/

GW4 press release:
http://gw4.ac.uk/news/gw4-joins-industry-partners-to-develop-first-of-its-kind-supercomputer/

Mathematicians use Balena to model global air pollution

Categories: High Performance Computing (HPC), Research

Dr Gavin Shaddick, of the University of Bath's Department of Mathematical Sciences and Deputy Director of the Bath Institute for Mathematical Innovation, has been leading an international research team, including the WHO, which used the University of Bath's High Performance Computing (HPC) system, Balena, to investigate global air pollution.

The results show that nine out of ten people (92 per cent) on Earth live in places where air pollution is higher than acceptable limits – even when they are outside.

For more information visit:

http://www.bath.ac.uk/news/2016/09/27/air-pollution/

http://www.wired.co.uk/article/uk-air-pollution-levels-london

http://www.bath.ac.uk/research/news/2016/09/21/air-pollution/

Balena Maintenance - 8th to 12th August 2016

Categories: High Performance Computing (HPC)

The maintenance work will begin on Monday 8th August and is expected to take up to a week to complete. During this maintenance window there will be no access to the Balena system and all queued jobs will need to be cleared from the scheduler.

The majority of this work will be performed by ClusterVision. We anticipate that a full week will give ClusterVision and ourselves enough time to complete these maintenance tasks, and we shall open up access once all disruptive tasks have been completed.

Below is a list of some of the maintenance work which will be taking place:

  • Upgrading the SLURM scheduler, applying security patches and enabling new features
  • Testing SLURM's node power management (see the sketch after this list)
  • Enabling global file locking on the BeeGFS scratch partition
  • ClusterVision will also be configuring new system monitoring tools
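
As an illustration of what SLURM's node power management involves, below is a minimal slurm.conf sketch. The parameter names are real SLURM options, but the values and script paths are assumptions for illustration only, not Balena's actual configuration:

  SuspendTime=600                               # seconds a node must sit idle before being powered down
  SuspendProgram=/usr/local/sbin/node_suspend   # hypothetical site script that powers a node off
  ResumeProgram=/usr/local/sbin/node_resume     # hypothetical site script that powers a node back on
  ResumeTimeout=300                             # seconds allowed for a node to return to service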

Regards,
Team HPC

5th Annual HPC Symposium was a great success

Categories: High Performance Computing (HPC), Research

Attendees of the 5th HPC Symposium

We would like to thank everyone who joined us at the 5th Annual Bath HPC Symposium yesterday. Following the trend from previous years, the event was a great success for the University.

This was the fifth symposium, and the event was attended by over 50 participants from within the University and collaborating groups, as well as external partners including Intel. The schedule for the day included contributions from across the University: physics, maths, chemistry, biology; mechanical, electrical and chemical engineering; and management and architecture. These contributions showcased the sheer variety of innovative HPC work being done across the University, ranging from molecular dynamics and quantum-chemical simulations through to high-level finite-element methods for solving engineering problems. The Symposium provided a great opportunity for networking and for learning about new developments in hardware, software and applications.

We would like to say a big thank you to the contributors, attendees, keynote speakers, session chairs and ground crew, who together made this meeting a day to remember.

We would also like to congratulate once again our two prize winners, Tobias Brewer and Will Saunders, for their outstanding poster presentation and contributed talk respectively.

Best wishes,
Steven, Jonathan and Roshan