Data science and AI help for SMEs in Cheshire and Warrington

Hi! I'm Tim Powell, a Business Development Manager at the Hartree Centre. In this blog post I'm going to talk about a relatively new funding opportunity for SMEs that I'm working on at the moment: Cheshire & Warrington 4.0.

Tim Powell, Business Development Manager, STFC Hartree Centre

So, what is CW4.0? 

Cheshire and Warrington 4.0 (CW4.0) is a fully funded ERDF programme of hands-on support for businesses in Cheshire and Warrington, focused on the exploration and adoption of digital technologies. The programme builds on the success of LCR 4.0, which supported over 300 companies in the Liverpool City Region to develop new products, decrease time to market, and accelerate productivity and turnover – all while creating 125 new jobs!

Through the CW4.0 programme, SMEs in Cheshire and Warrington can access technical expertise from our team here at the Hartree Centre. Our data scientists and software engineers have a strong track record of working on collaborative projects to solve industry challenges. To give you an idea, here are some examples of the areas we work in:

  • Artificial Intelligence applications, including machine learning and natural language processing 
  • Predictive maintenance and data analytics 
  • Modelling and simulation 
  • Software development and optimisation 
  • Cloud migration and applications 
  • IoT (Internet of Things) integration 

Our first CW4.0 engagement has already kicked off with G2O Water Technologies. Tristan Phillips, the VP of Engineering, has this to say about his hopes for the outcome of the project:

“Being able to do Computational Fluid Dynamics at Hartree is essential to model and design enhanced membranes that are able to filter almost unfilterable waters, extract precious materials from water streams and decarbonise the water industry.”

Tristan Phillips, VP Engineering
G2O Water Technologies

We have also just kicked off a project with Chester-based Circles Health & Wellbeing, who are looking to develop an AI chatbot to assist in mental health services. More projects are in the pipeline, covering areas such as predictive maintenance, using machine learning to improve routing algorithms, and building data warehouses.

“We are excited to be working with STFC on this hugely important healthcare project. Mental health patient numbers are ever-growing and placing a huge strain on healthcare services which are buckling under the pressure. Working with the Hartree Centre – a respected AI development partner – will enable us to build a dedicated healthcare assistant solution that will set a benchmark for similar future conversational AI assistants, delivering cost-efficient, patient-centric support services that enhance a client’s healthcare experience, build confidence in more human/tech blended healthcare solutions and deliver positive, measurable outcomes. The pressure to get this right is colossal and we are delighted to have such a talented and knowledgeable partner to work alongside us.”

Tom Mackarel, Director and Co-founder
Circles Health & Wellbeing

How does it work? 

CW4.0 projects range from creating a brand-new proof of concept (PoC) or minimum viable product (MVP) to accelerate a start-up to market, to adding value to an existing product through digitisation. The process of engaging with us on a CW4.0 project is simpler than that of many other grant applications.

After an initial discussion with me to define the challenge statement, followed by an eligibility check, I engage with our technical staff to write a project scope for a custom solution to the company's specific industry challenge. The project scope is presented back to the company for fine-tuning before we submit the final application. Each CW4.0 technical project typically lasts 2–4 months.

The process works really well for companies that already know what they want to innovate on and how. But if your company is interested in digital innovation and unsure which direction to take or what options are available to you, don't worry – we can help with that too.

CW4.0 is also designed to signpost companies in the right direction by offering a fully funded, risk-free feasibility study or digital innovation report. Our experience working across a wide range of industries – from engineering, manufacturing and life sciences to energy, professional services and transport – will be used alongside our technical expertise to benefit you. The feasibility study or digital innovation report is created working alongside your company, with you as the domain experts, to discover what will work best for you.

Manufacturing your digital future | CW4.0

Not just digital innovation – from virtual to physical 

Here at STFC, another department alongside the Hartree Centre is delivering support as part of CW4.0, so I would like to take some time to showcase how the Campus Technology Hub (CTH) can also benefit SMEs across Cheshire and Warrington.

On a project-by-project basis, companies can access a range of 3D printing capabilities and explore how 3D printing could aid product development, streamline manufacturing processes to reduce time and costs, and enable rapid prototyping of complex designs. Printers range from desktop-sized fused deposition modelling machines through to industrial metal 3D printers, with materials spanning plastics like PLA and ABS, fibreglass- and carbon-fibre-reinforced composites, resin polymers, and 316 stainless steel – the possibilities are endless!

To find out more about accessing support from the Campus Technology Hub specifically, you can contact my colleague Michaela at michaela.kiernan@stfc.ac.uk

Am I eligible? 

The main eligibility criteria for CW4.0 are that the company is classed as an SME, hasn't used its allocated state aid, and has registered premises in the postcode catchment area below:

Cheshire: CW1, CW2, CW3, CW4, CW5, CW6, CW7, CW8, CW9, CW10, CW11, CW12
Warrington: WA1, WA2, WA4, WA5, WA6, WA7, WA8, WA13, WA16
Chester: CH1, CH2, CH3, CH4, CH64, CH65, CH66

Who can help me? 

To discuss how the Hartree Centre can provide innovation support to your business through CW4.0 – helping to increase productivity, access new markets, kickstart new products and job creation, and enable growth – please get in touch at info@candw4.uk.


Part-funded by the European Regional Development Fund (ERDF), CW4.0 brings together the combined expertise and capabilities of the Virtual Engineering Centre (University of Liverpool), Liverpool John Moores University, the Science and Technology Facilities Council (STFC) and the Northern Automotive Alliance. 

HPC is Now | Supercomputing 2019

In November 2019, the Science and Technology Facilities Council (STFC) Hartree Centre and Scientific Computing Department exhibited at international conference Supercomputing 2019 (SC19) in Denver, USA. In this blog post, Research Software Engineer Tim Powell shares some thoughts and insights from the Hartree Centre team.

Hartree Centre team members attending Supercomputing 2019.

The variety of experiences one can have at Supercomputing is vast, and I think this echoes the direction high performance computing (HPC) is heading: the disciplines adopting HPC and the techniques available for acquiring computing power are growing ever more diverse. When discussing the themes of SC19 with a colleague (in the stationery room, of all places) I accidentally summed it up quite well: “Supercomputing 2019 was tall and broad.”

So let’s look at each aspect of this assessment – first up: “tall”. The next phase of supercomputing is exa-scale, and there was a significant number of talks, birds-of-a-feather sessions, and panels discussing exa-scale computing and its applications, software, and hardware.

Our Chief Research Officer, Vassil Alexandrov, gives his account of Supercomputing 2019 and the current exa-scale landscape here:

“Supercomputing 2019 was a busy time for me, as always! In the discussions and talks I attended, I felt that this year’s content was of an even higher quality than in previous years, and I noted more precise presentations delivered by researchers.

One area which I paid particular attention to was the discussion around exa-scale. The US National Labs are making big moves with their Exascale Computing Project. They are investing $1.8 billion in hardware and a similar amount in the development of software. The current US roadmap is to have their first machine, Frontier, in place in Q3 of 2021, costing an estimated $400 million, with another two machines to be delivered in 2022, each costing $600 million. All three machines are expected to be exa-scale and are rumoured to be a combination of AMD, Intel, Cray, and NVIDIA.

Europe is also heading towards exa-scale computing – eight centres across Europe are going to host large peta-scale and pre-exa-scale machines in a programme to develop exa-scale capabilities, with machines expected to reach 150–200 petaflops. Japan is about to install its Post-K supercomputer, which is based on ARM processors and is likely to be a very efficient machine. The expectation is for it to be operational in early 2020, so I am excited to see the results when it is up and running. China is also a player, but that is behind closed doors at the moment. It will be interesting to see what they reveal.

Throughout SC19, it was clear that the software challenges are going to be harder than the hardware challenges. My opinion is that we are still a few years off from having true exa-scale machines.”

Vassil Alexandrov chairs the 10th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems for academia and industry alongside Prof. Jack Dongarra (UTK & ORNL), Al Geist (ORNL) and Dr Christian Engelmann at Supercomputing 2019.

Now, let’s talk about how SC19 was “broad”.

The different applications of HPC were more obvious this year than in previous years. Multitudes of national laboratories and research institutes from around the globe displayed use cases on their stands in the exhibition hall, and a large variety of topics was discussed in talks and panels. There was, quite literally, something for everyone – assuming you have an interest or involvement in computation, that is!

I think this is largely due to the growth in access to data, and to new techniques such as machine learning and artificial intelligence (AI) leading disciplines that traditionally don’t use HPC to seek more computing resource. Additionally, with the massively growing offering of cloud computing resources, the barrier to entry has been significantly reduced and it is easier than ever to provision a cluster in the cloud.

So “tall” is more powerful computing, and “broad” is more applications of computing. Together these add up to a bigger impact for high performance computing, which was again echoed at SC19 with a series of talks in the 1st HPC Impact Showcase.

My personal highlight this year at SC19 was participating in the Building the Future panel at the 6th SC Workshop on Best Practices for HPC Training and Education. The all-day workshop focused on common challenges in enhancing HPC training and education, and allowed the global community to share experiences and resources to address them. The Building the Future panel discussed how we as trainers and educators can best prepare for the future of HPC and the training and education needs it will bring. The key take-away from my talk was that HPC has a diverse future of applications, and we need to make its power accessible to non-HPC experts who are only just finding uses for it.

Tim Powell speaks at the Building The Future panel during the 6th SC Workshop on Best Practices for HPC Training and Education.

On the following day I was fortunate enough to attend the Early Careers Program, aimed at people in the first few years of their career in HPC and delivering a variety of activities, talks, and panels. It was great to see STFC represented by Catherine Jones and Alison Kennedy. As a Research Software Engineer (RSE) I particularly enjoyed the panels and talks involving RSEs and members of RSE societies around the globe. It’s great to see that managing research software properly is being put on the international stage at conferences as big as SC! I also noted that in a series of talks on cloud computing, a lot of time was given over to discussing the advantages (rarely the disadvantages) of tailor-made HPC in the cloud.

As a team, we had great fun facilitating a very popular build-your-own Lego supercomputer activity, in the form of our very own Scafell Pike! Needless to say, our limited supplies disappeared quicker and quicker each morning as the word spread. Our HPiC Raspberry Pi cluster was also present, boasting some new and updated demos developed by our recent summer placement students James and Lizzie!

The Hartree Centre takes its supercomputer Scafell Pike to Supercomputing 2019… in Lego form!

I also spoke to some of my colleagues to get their own perspectives on SC19. Aiman Shaikh, Research Software Engineer, discussed her first time at the conference:

“I really enjoyed being part of the Women in HPC workshop, and attending technical talks around containers in HPC and LLVM compilers. The networking events held by different vendors were also a great opportunity to meet people. There was so much going on everywhere that it was difficult to keep pace with everything!

‘HPC and Cloud Operations at CERN’ was a very interesting talk by Maria Girone, who discussed the technologies used at CERN, software and architecture issues, and how they are investigating machine learning (ML) for object detection and reconstruction.

The Women in HPC workshop was really good, especially the keynote from Bev Crair, Lenovo, on “the butterfly effect of inclusive leadership”. Bev said that diverse teams lift performance by inviting in creativity, which I completely agree with. Another inspiring and motivating talk, by Hai Ah Nam from Los Alamos National Lab, covered surviving difficult events and minimising their impact on your career. Hai explained that we cannot stop unforeseen events in life, but we can focus on how to tackle them. The Women in HPC networking events, often joined by many diverse groups of people, provided a great chance to network with attendees from all different backgrounds.

The journey of exploration did not end after SC: afterwards I went to the Rockies with some colleagues, which was a fun-filled few days of walking, and with so little light pollution we could see the Milky Way at night!”

Aiman Shaikh gets involved in the Women in HPC workshop at Supercomputing 2019.

SC19 was a new experience for Research Software Engineer Drew Silcock too:

“Attending SC19 for the first time really exposed me to the wider scientific computing community. I gained an understanding of the various technologies used by scientists and engineers, and the purposes they are used for. Many are scaling their applications with standard MPI+OpenMP stacks, but I attended several interesting workshops and talks about alternative technologies and approaches. Of particular interest to me are topics relating to the development of programming languages and compilers, so I very much enjoyed hearing from people working on and with the LLVM compiler toolchain, additions to the C++ standard, and the development of domain-specific languages for scientific computing.

In terms of trends, it’s exciting to see how many people are starting or continuing to use Python for scientific computing. Cloud services are also becoming increasingly relevant, especially for new companies without on-premises capabilities. As machine learning models get bigger and bigger, more effort is being put into bridging the gaps between the HPC and ML communities to ensure that they can benefit each other.”
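As an aside, the “standard MPI+OpenMP stack” Drew mentions is worth a concrete illustration for readers new to it: MPI distributes work across nodes, while OpenMP parallelises across the cores within each node. Here is a minimal hybrid hello-world sketch (my own example, not code from any SC19 talk):

```cpp
// Minimal hybrid MPI+OpenMP sketch.
// Compile with e.g.:  mpicxx -fopenmp hybrid.cpp -o hybrid
// Run with e.g.:      mpirun -np 2 ./hybrid
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char **argv) {
    // Request thread support, since OpenMP threads live inside each rank.
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each MPI rank spawns its own OpenMP thread team: MPI handles the
    // distributed-memory level, OpenMP the shared-memory level.
    #pragma omp parallel
    {
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```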

Jony Castagna, an NVIDIA Deep Learning Ambassador with 10 years’ experience in HPC and several years’ experience in deep learning, shared his thoughts:

“We’re seeing fast-growing applications of deep learning for science. Three different approaches have been identified: supporting or accelerating current algorithms, for example AI-based preconditioners or matrix solvers built on neural networks (NNs); solving partial differential equations with NNs while enforcing physical information (via Physics-Informed Neural Networks, PINNs); and fully replacing the physical equations with NNs trained on numerical simulation data. This last approach seems the most attractive, as it appears to show that NNs can learn the physics from data and extrapolate further at higher speed. For example, in the work of Kadupitiya, Fox and Jadhao, a simple NN was used to predict the contact density of ions in nanoconfinement, trained on data from a molecular dynamics (MD) simulation. A strong match between prediction and MD simulation was presented.

Increasing use of the C++17 standard library has emerged for performance portability. Many frameworks, like Kokkos, RAJA and HPX, have been presented as possible solutions for targeting different architectures. However, NVIDIA doesn’t look to be standardising heterogeneous programming; they expect the hardware to become more homogeneous between CPU and GPU. We’d like to test NNs with DL_MESO to see how well they perform in reproducing coarse-grained simulations. We have also applied for an ECAM2 project to port DL_MESO to C++17 and use Kokkos for performance portability. This will allow us to compare performance with the current CUDA version and understand how well Kokkos can perform.”
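To unpack the PINN idea Jony describes: a network u_θ is trained on a loss that combines a data-misfit term with the residual of the governing PDE, written schematically as N[u] = 0 and evaluated at collocation points, so the physics is enforced during training rather than learned purely from data. A schematic loss (my notation, not taken from any SC19 talk) looks like:

```latex
\mathcal{L}(\theta)
  = \underbrace{\frac{1}{N_d}\sum_{i=1}^{N_d}\bigl|u_\theta(x_i)-u_i\bigr|^2}_{\text{data fit}}
  + \underbrace{\frac{\lambda}{N_c}\sum_{j=1}^{N_c}\bigl|\mathcal{N}[u_\theta](x_j)\bigr|^2}_{\text{PDE residual}}
```

Here the x_j are collocation points and λ weights how strongly the physics is enforced; dropping the residual term recovers the purely data-driven third approach Jony lists, where the NN is trained only on simulation data.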
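Since Kokkos comes up as one of the performance-portability options, it is worth seeing how the idea looks in code: you write a parallel loop once against Kokkos abstractions, and the build configuration maps it onto OpenMP, CUDA or other backends. A minimal sketch of the pattern (my own illustration, not DL_MESO code):

```cpp
// Minimal Kokkos sketch: loops written once, portable across backends.
#include <Kokkos_Core.hpp>

int main(int argc, char *argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int n = 1000000;
        // A View is an array allocated in the memory space of the chosen
        // backend (host memory for OpenMP, device memory for CUDA).
        Kokkos::View<double*> x("x", n);

        // The same lambda runs as OpenMP threads or CUDA threads
        // depending on how Kokkos was configured at build time.
        Kokkos::parallel_for("fill", n, KOKKOS_LAMBDA(const int i) {
            x(i) = 2.0 * i;
        });

        // A parallel reduction, again written once for every backend.
        double sum = 0.0;
        Kokkos::parallel_reduce("sum", n,
            KOKKOS_LAMBDA(const int i, double &acc) { acc += x(i); }, sum);
    }
    Kokkos::finalize();
    return 0;
}
```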

James Clark and Aiman Shaikh attend talks by Mellanox Technologies at Supercomputing 2019.

High Performance Software Engineer James Clark concluded:

“On Sunday I presented at the Atos Quantum Workshop. This was a showcase of how the Hartree Centre is using our Quantum Learning Machine, such as our joint training and access programme with Atos and our ongoing project work with Rolls-Royce.

I also talked about our future plans to develop quantum software that can take advantage of both quantum computing and HPC.

One of the most interesting developments in HPC this year was how far ARM CPUs have come. RIKEN and Fujitsu’s Fugaku is one of the major success stories, featuring the first deployment of the new SVE (Scalable Vector Extension) instructions. Fujitsu announced that Cray will be bringing their ARM CPUs to the rest of the world. NVIDIA also announced that their GPGPUs will be supported on ARM platforms, with a number of ARM CPUs listed as supported on release. I am looking forward to seeing how the increased competition in the hardware space turns out, especially with AMD’s Rome CPUs and Intel’s Xe GPUs. The future of HPC looks to be very interesting and it’s an exciting time to be involved.”

I couldn’t have said it better myself!
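As a footnote to James’s point about SVE: its novelty is vector-length-agnostic programming, where predicate registers handle the loop tail so the same binary runs on any SVE vector width. A rough intrinsics sketch of a predicated vector add (my own illustration, assuming a compiler that ships arm_sve.h):

```cpp
// Vector-length-agnostic SVE sketch: a predicated vector add.
// The same code runs unchanged on 128-bit through 2048-bit SVE hardware.
#include <arm_sve.h>

void vadd(double *a, const double *b, long n) {
    // svcntd() = number of doubles per vector, unknown until run time.
    for (long i = 0; i < n; i += svcntd()) {
        // The predicate masks off lanes past the end of the array,
        // so no scalar remainder loop is needed.
        svbool_t pg = svwhilelt_b64(i, n);
        svfloat64_t va = svld1(pg, &a[i]);
        svfloat64_t vb = svld1(pg, &b[i]);
        svst1(pg, &a[i], svadd_x(pg, va, vb));
    }
}
```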