Data Centre Services
Platform Solutions
About the team
We are a small team, including two apprentices, that provides hosting, maintenance and support for the large compute and storage farms and private cloud storage platforms within the Wellcome Genome Campus Data Centre. The hardware and software environments we support are relied on by scientists and administrators at the Sanger Institute, EMBL’s European Bioinformatics Institute (EMBL-EBI), Cancer Research UK and the wider Campus to conduct their science.
We provide a year-round, 24/7 service to ensure maximum availability of the compute and storage hardware. To enable the Sanger Institute and the Campus to deliver their bold and ambitious science at scale, we work in partnership with our colleagues in Informatics and Digital Solutions (IDS) to build in as much capacity and flexibility as possible, continually modifying our technologies and approaches and horizon scanning for new opportunities.
Focus of our work
Our data centre environment is unlike that of any other data centre in Western Europe as we continually seek to push the limits of what is possible to maximise capacity in a space that is generally considered to be quite inflexible.
To achieve this we focus on:
- compute and storage capacity – by maximising the density of compute within a limited physical space
- connections to national and campus networks – by negotiating and securing high-speed and dark fibre access for our onsite and offsite data centres, and carrying out our own specialist cabling on Campus
- cooling systems – by employing the latest technologies and techniques to enable increased compute and storage density
- disaster recovery – by procuring and maintaining robust offsite data centres with dark fibre connections
- future-proofing – by horizon scanning for, and helping to shape, developing data centre technologies to accommodate new computing approaches and our scientists’ strategic research goals
- power supply – by ensuring uninterrupted power supplies and deploying new technologies to fulfil future needs
- sustainability – by employing flexibility and innovation to ensure that the solutions we deploy not only meet current needs, but can adapt to meet future challenges too.
Sustainability
One key area we focus on is sustainability. The Data Centre is the single largest consumer of power on Campus, so we take efficiency and minimising waste very seriously. We achieve this by:
- seeking to be as efficient as possible with our processes
- ensuring that the hardware we procure and install is the best possible match to meet the ambitious strategic aims of the Institute and the Campus both now and for future research requirements.
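A standard industry yardstick for this kind of efficiency is Power Usage Effectiveness (PUE): the ratio of total facility power to the power actually consumed by the IT equipment. The sketch below shows the calculation; the figures are illustrative assumptions, not measurements from the Campus Data Centre.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; typical enterprise sites sit well above it."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical figures: 1,200 kW drawn at the meter for 1,000 kW of IT load.
print(round(pue(1200, 1000), 2))  # 1.2
```

Every tenth of a point shaved off the PUE is power that goes into science rather than overhead, which is why efficient processes and well-matched hardware both matter.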
Flexibility and Innovation
The defining characteristic of the Wellcome Genome Campus is its unbridled ambition to conduct daring and innovative genomic science at a scale that few, if any, can match. This bold ambition drives not only the Campus’ researchers but also everyone who supports their work.
The Sanger Institute’s and Campus’ scientists continually explore and embrace new technology and computational approaches, and our goal is to provide infrastructure solutions that can flex to accommodate them. To achieve this we work with our colleagues across the IDS hardware and software teams to horizon scan, identify and procure the most relevant technologies to meet the demands of incoming and future hardware. For example, we are exploring what role quantum computing technology might play in the future, and how we can position our infrastructure, hardware and systems to adopt it seamlessly if and when required.
We also employ this flexibility within the team to continually innovate and modify our processes and hardware to provide the maximum value and capacity possible. One area that we are actively investigating is how we can meet and exceed the needs of artificial intelligence and machine learning environments. In this way we seek to enable the Campus to grow and lead the way in cutting-edge science.
We are now focussed on delivering higher-density hardware wherever we can, to extract the maximum compute and storage from the physical space of the Data Centre. There aren’t many research organisations in the life sciences sector that strive for the levels of density in their data centres that we do. We lead the world in this respect.
And the future looks even more exciting in terms of the density of compute and storage we are seeking to deliver: we are now exploring 80 kW racks and water cooling systems. The Campus is set to expand enormously over the next 20 years, and we are working closely with the Campus development teams to ensure that the systems and infrastructure we put in place have the capacity to meet the needs not just of the Sanger Institute, but of the broader Campus as a whole over that time.
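To give a sense of what an 80 kW rack implies for cooling, the back-of-envelope sketch below estimates the water flow needed to carry that much heat away, using the standard relation Q = m·c·ΔT. The 10 K coolant temperature rise is an illustrative assumption, not a detail of our actual cooling design.

```python
def coolant_flow_lpm(heat_kw: float, delta_t_k: float = 10.0) -> float:
    """Water flow (litres/minute) needed to remove heat_kw of heat
    for a given coolant temperature rise, from Q = m * c * dT."""
    c_water = 4186.0   # specific heat of water, J/(kg*K)
    density = 1.0      # kg per litre (approximation for water)
    kg_per_s = (heat_kw * 1000.0) / (c_water * delta_t_k)
    return kg_per_s / density * 60.0

# An 80 kW rack with a 10 K inlet/outlet temperature rise:
print(round(coolant_flow_lpm(80), 1))  # ~114.7 L/min
```

Roughly 115 litres of water per minute per rack is well beyond what air cooling can move, which is why rack densities at this level push data centres towards liquid cooling.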
Knowledge, Resilience and Networking
To ensure that we supply the most valuable, knowledgeable and insightful service possible, the whole team is actively engaged in training to keep up to date with the latest technology. All our team have taken part in industry-standard data centre management training.
In addition, we are fully versed in the technology we have deployed, so that we can take full responsibility for the support and maintenance of the Data Centre’s operation. For example, a number of our team have undergone DDN’s specialist training to run our significant DDN estate.
We also provide Data Centre cabling training, a specialist skill that is not taught or learnt outside of the Data Centre environment. Because of this, we are able to do all our own cabling and terminations within the Data Centre, leading to faster turnaround times, greater resilience and significant cost savings.
We also network with the national and global Data Centre community to discover new ideas and help shape future developments. Every year we go to a number of conferences in the UK, including Data Centre World, where we have been invited to give talks. We also attend ISC High Performance, one of the largest HPC events in the world.
Talent development
We look to develop our talent internally wherever possible – we grow our own. For example, we took on an apprentice a few years ago and he is now a full-time employee. We have now taken on two additional apprentices to build on the value of this scheme and to increase the strength and depth of the team.
We are proud that our apprenticeship training is tailored specifically to give our apprentices the full range of skills and experience needed to manage a data centre. National schemes often focus on either the hardware side or the network side, and don’t cover the very specific needs of maintaining and developing a data centre. Because managing a Data Centre spans both aspects, we provide training across both disciplines. The unique training package we have developed in partnership with the Sanger Institute’s Learning and Development team has enabled us to build in additional tailored knowledge as well.
Software and Updates
We procure, support and run the software applications that sit close to the hardware stacks and manage the infrastructure to ensure optimal operation. We link these to our Data Centre Information Management System to comprehensively visualise all the hardware housed in the Data Centre, along with all our environmental and power metrics, allowing informed management and planning.
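The core idea behind this kind of management system is joining an asset inventory with live environmental and power readings. The sketch below is a minimal, hypothetical illustration of that join (the rack names, budgets and readings are invented, and a real DCIM platform does far more):

```python
# Hypothetical rack inventory with per-rack power budgets.
inventory = {
    "rack-A1": {"budget_kw": 30.0, "hosts": 40},
    "rack-B2": {"budget_kw": 80.0, "hosts": 96},
}

# Latest power readings, e.g. as reported by metered PDUs.
readings_kw = {"rack-A1": 27.5, "rack-B2": 61.0}

def racks_over_threshold(threshold: float = 0.9) -> list[str]:
    """Return racks drawing more than `threshold` of their power budget."""
    return [rack for rack, meta in inventory.items()
            if readings_kw.get(rack, 0.0) > threshold * meta["budget_kw"]]

print(racks_over_threshold())  # ['rack-A1']
```

Combining inventory and metrics in one view is what lets capacity planning decisions (where the next rack of hardware can safely land) be made from data rather than guesswork.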
We are also instrumental in firmware upgrades for the hardware. For example, we have upgraded all the iRODS platforms, a significant estate with multi-petabyte storage farms. In many ways it is like painting the Forth Bridge: as soon as we have finished updating all the machines, there is a new version ready to be applied.
Long-term Storage and Disaster Recovery
A key area of our work is maintaining the integrity and availability of the data generated on Campus. The need for long-term storage, retrieval and archiving of data in formats and on hardware that are robust, future-proofed and scalable is a pressing issue for the genomics research community as a whole. So we work in close partnership with our colleagues across IDS, and the Infrastructure Management Team in particular, to ensure that the data underpinning the premier science carried out on the Campus remains available, and easily accessible, whenever it is required.
We have robust backup and disaster recovery systems in place to prevent data loss. We operate a number of offsite data centres to provide disaster recovery, for which we have negotiated dark fibre connections that can support multiple 100-gigabit links. An additional benefit is that the architecture we use means that many of these platforms can also provide primary services for our external collaborators, further speeding global research.
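Link capacities at this scale can be put in perspective with a simple calculation of how long a bulk replication would take. The sketch below is illustrative only; the 80% protocol efficiency and the 1 PB payload are assumptions, not figures from our actual replication workloads.

```python
def transfer_time_hours(data_tb: float, link_gbps: float = 100.0,
                        efficiency: float = 0.8) -> float:
    """Rough time to move data_tb terabytes over a link of link_gbps
    gigabits/second, assuming a given end-to-end protocol efficiency."""
    bits = data_tb * 1e12 * 8                  # terabytes -> bits
    effective_bps = link_gbps * 1e9 * efficiency
    return bits / effective_bps / 3600.0

# Replicating 1 PB (1,000 TB) over one 100 Gbit/s link at 80% efficiency:
print(round(transfer_time_hours(1000), 1))  # ~27.8 hours
```

At multi-petabyte scale, a single 100-gigabit link means replication times measured in days, which is why multiple parallel links matter for a credible disaster recovery posture.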