What our customers think | Managing our Impact

Hear from our Head of Impact Management Karen Lee and find out about her role at the Hartree Centre and her highlights from our recent commercial outcomes survey.

Karen Lee, Head of Impact Management

Anyone working in the research and innovation ecosystem will be very familiar with the concept of ‘impact’ and with generating benefits for the UK economy and its people.

As an applied research centre focused on supporting industry in their adoption of transformational digital technologies such as AI and data analytics, this one word, impact, summarises what the Hartree Centre is all about. My role is to help us show it.

The recent AI Activity in UK Businesses report estimates that companies’ annual expenditure on AI technologies was over £62 billion in 2020. With the right conditions in place (for example, by reducing the barriers to adoption… which is what we do), the report predicts that total AI expenditure could grow 11-17% annually over the next five years. Meanwhile, the Impact of AI on Jobs report estimates that such technologies could boost our economy by as much as 10% of GDP by 2030. So we are talking pretty significant numbers here!
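For context, those two growth figures can be turned into a rough range with a quick compound-growth calculation. This is a back-of-the-envelope illustration, not a projection taken from either report:

```python
# Back-of-the-envelope projection of UK AI expenditure, assuming the
# 2020 baseline of ~£62 billion grows at 11-17% annually for five years.
baseline_bn = 62.0
years = 5

low = baseline_bn * 1.11 ** years    # 11% annual growth
high = baseline_bn * 1.17 ** years   # 17% annual growth

print(f"Illustrative five-year range: £{low:.0f}bn - £{high:.0f}bn")
```

Even at the lower bound, annual expenditure would be well above £100 billion within five years, which is why reducing barriers to adoption matters so much.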

It’s exciting that the Hartree Centre (and therefore my talented colleagues) is one of the delivery mechanisms to achieve this. As Head of Impact Management, my focus is on demonstrating the benefits of our portfolio of projects and programmes by helping us to measure and understand the value we contribute to the organisations we collaborate with, as well as to the wider economy and society.

One way this is done is by capturing information – via surveys and interviews – on customers’ experiences of working with us and tracking this over time. We’ve just published an independent report which has provided useful insight into our portfolio of commercial projects.


The key findings from the report are based on responses from 31% of past user organisations that engaged in commercial projects completed up to the end of January 2021.

The Commercial Beneficiary Outcomes report is interesting to me on a number of levels. First off, I want to give a shout out to all of the people who make the Hartree Centre ‘work’, and I am proud that 94% of respondents stated they already had recommended us to others, or would. The work my colleagues do is remarkable – it is right on the cutting edge and more often than not beyond my realm of understanding.

Back to the organisations… even if you just look at the sectors they operate in, it really does show that digital technologies pretty much touch every part of the economy.

And these businesses are not all at the same stage of their digital transformation journey either. Some are just starting out, with an idea in mind but unsure of whether it would work or where it might take their business, whilst others are optimising processes or product development. I think this is demonstrated in the range of outcomes reported, which indicate important improvements in productivity, performance, skills and R&D investment. For example:

  • 76% reported that the strategic importance of adopting and applying digital technologies had increased
  • 65% have seen increased investment in R&D within their organisation
  • 84% have improved the extent to which their organisation uses or exploits data
  • 89% have increased their in-house technical expertise and capabilities

Other benefits reported included:

  • Enhanced confidence in products and services
  • Improved effectiveness of product development
  • Optimised processes
  • Reduced product development costs
  • Reduced time to market
  • Increased sales or profitability
  • Enhanced reputation

Although you’ll get a picture of the early outcomes and future potential soon after a project, the nature of innovation is such that full benefits often take time to be realised. Our survey reflects this: 79% of participants said that they expect to see further commercial benefits over the next 1-3 years.

Following this research, as part of the Hartree Centre’s continuous improvement work, we’ve enhanced our evidence collection so that we capture information before, after, and 2-3 years after our projects, to track how benefits accrue and impact is achieved over time.

If you’d like to find out more, take a look at the public summary of the report or our infographic. We also have some fantastic case studies which tell a powerful story of some of our individual client projects.


Data science and AI help for SMEs in Cheshire and Warrington

Hi! I am Tim Powell, a Business Development Manager at the Hartree Centre. In this blog post I am going to be talking about a relatively new funding opportunity for SMEs that I’m working on at the moment, Cheshire & Warrington 4.0. 

Tim Powell, Business Development Manager, STFC Hartree Centre

So, what is CW4.0? 

Cheshire and Warrington 4.0 (CW4.0) is a fully ERDF-funded programme of hands-on support for businesses in Cheshire and Warrington, focused on the exploration and adoption of digital technologies. The programme is built on the success of LCR 4.0, which supported over 300 companies in the Liverpool City Region to develop new products, decrease time to market, and accelerate productivity and turnover – all while creating 125 new jobs!

Through the CW4.0 programme SMEs in Cheshire and Warrington can access technical expertise from our team of experts here at the Hartree Centre. Our data scientists and software engineers have a strong track record of working on collaborative projects to solve industry challenges. To give you an idea, here are some examples of the areas we work in:

  • Artificial Intelligence applications, including machine learning and natural language processing 
  • Predictive maintenance and data analytics 
  • Modelling and simulation 
  • Software development and optimisation 
  • Cloud migration and applications 
  • IoT (Internet of Things) integration 

Our first CW4.0 engagement has already kicked off with G2O Water Technologies. Tristan Phillips, VP of Engineering, had this to say about his hopes for the outcome of the project:

“Being able to do Computational Fluid Dynamics at Hartree is essential to model and design enhanced membranes that are able to filter almost unfilterable waters, extract precious materials from water streams and decarbonise the water industry.”

Tristan Phillips, VP Engineering
G2O Water Technologies

We have also just kicked off a project with Chester-based Circles Health & Wellbeing, who are looking to develop an AI chatbot to assist mental health services. We are also developing more projects in the pipeline covering areas such as predictive maintenance, using machine learning to improve routing algorithms, and building data warehouses.

“We are excited to be working with STFC on this hugely important healthcare project. Mental health patient numbers are ever-growing and placing a huge strain on healthcare services which are buckling under the pressure. Working with the Hartree Centre – a respected AI development partner – will enable us to build a dedicated healthcare assistant solution that will set a benchmark for similar future conversational AI assistants, delivering cost-efficient, patient-centric support services that enhance a client’s healthcare experience, build confidence in more human/tech blended healthcare solutions and deliver positive, measurable outcomes. The pressure to get this right is colossal and we are delighted to have such a talented and knowledgeable partner to work alongside us.”

Tom Mackarel, Director and Co-founder
Circles Health & Wellbeing

How does it work? 

CW4.0 projects can range from creating a brand new proof of concept (PoC) or minimum viable product (MVP) to help accelerate a start-up to market, to adding value to an existing product through digitisation. The process of engaging with us on a CW4.0 project is simpler than many other grant applications.

After an initial discussion with me to define the challenge statement, followed by an eligibility check, I engage with our technical staff to write a project scope setting out a custom solution to the company’s specific industry challenge. The project scope is presented back to the company for fine-tuning before we submit the final application. Each CW4.0 technical project typically lasts 2-4 months.

The process works really well for companies who already know how and what they want to innovate on. But if your company is interested in digital innovation and not sure which direction to take or the options available to you, don’t worry – we can help with that too.

CW4.0 is also designed to signpost companies in the right direction by offering a fully funded, risk-free feasibility study or digital innovation report. Our experience working across a wide range of industries – from engineering, manufacturing and life sciences to energy, professional services and transport – will be used alongside our technical expertise to benefit you. The feasibility study or digital innovation report will be created working alongside your company as the domain experts, to discover what will work best for you.

Manufacturing your digital future | CW4.0

Not just digital innovation – from virtual to physical 

Here at STFC, alongside the Hartree Centre there is another department delivering support as part of CW4.0, so I would like to take some time to showcase how the Campus Technology Hub (CTH) can also benefit SMEs across Cheshire and Warrington.

Companies can access a range of 3D printing capabilities and explore how 3D printing could aid product development, streamline manufacturing processes to reduce time and costs, and enable rapid prototyping of complex designs on a project-by-project basis. The 3D printers range from desktop-sized fused deposition modelling printers that can print in a variety of plastics, through to industrial metal 3D printers, with materials varying from plastics like PLA or ABS, to materials reinforced with fibreglass or carbon fibre, resin polymers and 316 stainless steel – the possibilities are endless!

To find out more about accessing support from the Campus Technology Hub specifically, you can contact my colleague Michaela at michaela.kiernan@stfc.ac.uk

Am I eligible? 

The main eligibility criteria for CW4.0 are that the company is classed as an SME, has not used up its allocated state aid, and has a registered premises in the postcode catchment area below:

  • Cheshire: CW1, CW2, CW3, CW4, CW5, CW6, CW7, CW8, CW9, CW10, CW11, CW12
  • Warrington: WA1, WA2, WA4, WA5, WA6, WA7, WA8, WA13, WA16
  • Chester: CH1, CH2, CH3, CH4, CH64, CH65, CH66

Who can help me? 

To discuss how the Hartree Centre can provide innovation support to your business – helping to increase productivity, access new markets, kickstart new products and job creation, and enable growth through CW4.0 – please get in touch at info@candw4.uk.


Part-funded by the European Regional Development Fund (ERDF), CW4.0 brings together the combined expertise and capabilities of the Virtual Engineering Centre (University of Liverpool), Liverpool John Moores University, the Science and Technology Facilities Council (STFC) and the Northern Automotive Alliance. 

HPC is Now | Supercomputing 2019

In November 2019, the Science and Technology Facilities Council (STFC) Hartree Centre and Scientific Computing Department exhibited at international conference Supercomputing 2019 (SC19) in Denver, USA. In this blog post, Research Software Engineer Tim Powell shares some thoughts and insights from the Hartree Centre team.

Hartree Centre team members attending Supercomputing 2019.

The variety of experiences one can have at Supercomputing is vast, and I think this reflects the direction high performance computing (HPC) is going: the number of disciplines adopting HPC, and the different techniques available for acquiring computing power, are growing ever more diverse. When discussing the themes of SC19 with a colleague (in the stationery room of all places) I accidentally summed it up quite well: “Supercomputing 2019 was tall and broad.”

So let’s look at each aspect of this assessment – first up: “tall”. The next phase of supercomputing is exa-scale, and there were a significant number of talks, birds-of-a-feather sessions and panels discussing exa-scale computing and its applications, software and hardware.

Our Chief Research Officer, Vassil Alexandrov, gives his account of Supercomputing 2019 and the current exa-scale landscape here:

“Supercomputing 2019 was a busy time for me, as always! In the discussions and talks I attended, I felt that this year’s content was of an even higher quality than previous years, and I noted that there were more precise presentations delivered by researchers.

One area which I paid particular attention to was the discussion around exa-scale. The US National Labs are making big moves with their Exascale Computing Project: they are investing $1.8 billion in hardware and a similar amount in the development of software. The current US roadmap is to have their first machine, Frontier, in place in Q3 of 2021 at an estimated cost of $400 million, with another two machines to be delivered in 2022, each costing $600 million. All three machines are expected to be exa-scale and are rumoured to be a combination of AMD, Intel, Cray and NVIDIA hardware.

Europe is also heading towards exa-scale computing – eight centres across Europe are going to host large peta-scale and pre-exa-scale machines as part of its programme to develop exa-scale capabilities, with machines expected to reach 150-200 peta-flops. Japan is about to install its Post-K supercomputer, which is based on ARM processors and is likely to be a very efficient machine. The expectation is for it to be operational in early 2020, so I am excited to see what the results will be when it is up and running. China is also a player, but that is behind closed doors at the moment. It will be interesting to see what they reveal.

Throughout SC19, it was clear that the software challenges are going to be harder than the hardware challenges. My opinion is that we are still a few years off from having true exa-scale machines.”

Vassil Alexandrov chairs the 10th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems for academia and industry alongside Prof. Jack Dongarra (UTK & ORNL), Al Geist (ORNL) and Dr Christian Engelmann at Supercomputing 2019.

Now, let’s talk about how SC19 was “broad”.

More so this year than in previous years, the different applications of HPC were plain to see. Multitudes of national laboratories and research institutes from around the globe displayed use cases on their stands in the exhibition hall, and a large variety of topics were discussed in talks and panels. There was, quite literally, something for everyone – assuming you have an interest or involvement in computation, that is!

I think this is largely due to the growth in access to data, and to new techniques such as machine learning and artificial intelligence (AI) driving disciplines that traditionally don’t use HPC to seek more computing resource. Additionally, with the massively growing offering of cloud computing resources, the barrier to entry has been significantly reduced and it is easier than ever to provision a cluster in the cloud.

So “tall” is more powerful computing, and “broad” is more computing applications. This all culminates in a bigger impact for high performance computing, which again was echoed at SC19 with a series of talks in the 1st HPC Impact Showcase.

My personal highlight this year at SC19 was participating in the Building the Future panel at the 6th SC Workshop on Best Practices for HPC Training and Education. The all-day workshop focused on common challenges in enhancing HPC training and education, and allowed the global community to share experiences and resources to address them. The Building the Future panel focused discussion on how we as trainers and educators can best prepare for the future of HPC and the training and education needs it will bring. The key take-away from my talk was that HPC has a diverse future of applications, and we need to help bring its power to non-HPC experts who are only just finding uses for it.

Tim Powell speaks at the Building The Future panel during the 6th SC Workshop on Best Practices for HPC Training and Education.

On the following day I was fortunate enough to attend the Early Careers Program, aimed at people in the first few years of their career in HPC and delivering a variety of activities, talks, and panels. It was great to see STFC represented by Catherine Jones and Alison Kennedy. As a Research Software Engineer (RSE) I particularly enjoyed panels and talks involving RSE and members from the RSE Societies around the globe. It’s great to see that managing research software properly is being put on the international stage at conferences as big as SC! I also noted that in a series of talks on cloud computing, a lot of time was given over to discussing the advantages (rarely the disadvantages) of tailor-made HPC in the cloud.

As a team, we had great fun facilitating a very popular build-your-own Lego supercomputer activity, in the form of our very own Scafell Pike! Needless to say, our limited supplies disappeared quicker and quicker each morning as the word spread. Our HPiC Raspberry Pi cluster was also present, boasting some new and updated demos developed by our recent summer placement students James and Lizzie!

The Hartree Centre takes its supercomputer Scafell Pike to Supercomputing 2019… in Lego form!

I also spoke to some of my colleagues to get their own perspectives on SC19. Aiman Shaikh, Research Software Engineer, discussed her first time at the conference:

“I really enjoyed being part of the Women in HPC workshop, and attending technical talks around containers in HPC and LLVM compilers. The networking events held by different vendors were also a great opportunity to meet people. There was so much going on everywhere that it was difficult to keep pace with everything!

HPC and Cloud Operations at CERN was a very interesting talk by Maria Girone, who talked about technologies used at CERN, software and architecture issues and how they are investigating machine learning (ML) for object detection and reconstruction.

The Women in HPC workshop was really good, especially the keynote from Bev Crair, Lenovo, on “the butterfly effect of inclusive leadership”. Bev said that diverse teams lift performance by inviting in creativity, which I completely agree with. Another inspiring and motivating talk, by Hai Ah Nam from Los Alamos National Lab, was about surviving difficult events and minimising their impact on your career. Hai explained that we cannot stop unforeseen events in life, but we can focus on how to tackle them. The Women in HPC networking events, often joined by many diverse groups of people, provided a great chance to network with attendees from all different backgrounds.

The journey of exploration did not end after SC, as afterwards I went to the Rockies with some colleagues – a fun-filled few days of walking, and with so little light pollution we could see the Milky Way at night!”

Aiman Shaikh gets involved in the Women in HPC workshop at Supercomputing 2019.

SC19 was a new experience for Research Software Engineer Drew Silcock too:

“Attending SC19 for the first time really exposed me to the wider scientific computing community. I gained an understanding of the various technologies used by scientists and engineers, and the purposes they are used for. Many are scaling their applications with standard MPI + OpenMP stacks, but I attended several interesting workshops and talks about alternative technologies and approaches. Of particular interest to me are all topics relating to the development of programming languages and compilers, so I very much enjoyed hearing from people working on and with the LLVM compiler toolchain, additions to the C++ standard, and the development of domain-specific languages for scientific computing.

In terms of trends, it’s exciting to see how many people are starting and continuing to use Python for scientific computing. Cloud services are also becoming increasingly relevant, especially for new companies without on-premises capabilities. As machine learning models get bigger and bigger, more effort is being put into bridging the gaps between the HPC and ML communities to ensure that they can benefit each other.”

Jony Castagna, an NVIDIA Deep Learning Ambassador with 10 years’ experience in HPC and several years’ experience in deep learning, shared his thoughts:

“We’re seeing fast-growing applications of deep learning for science. Three different approaches have been identified: supporting or accelerating current algorithms, for example AI-based preconditioners or matrix solvers built on neural networks (NNs); solving partial differential equations using NNs while enforcing physical information (via physics-informed neural networks, PINNs); and fully replacing physical equations with NNs trained on numerical simulation data. This last approach seems the most attractive, as it shows the capability of NNs to learn the physics from data and extrapolate further at higher speed. For example, in the work of Kadupitiya, Fox and Jadhao, a simple NN was used to predict the contact density of ions in nanoconfinement, trained on data from a molecular dynamics (MD) simulation. A strong match between prediction and MD simulation was presented.

An increasing use of the C++17 standard library has emerged for performance portability. Many paradigms, like Kokkos, RAJA, HPX, etc., have been presented as possible solutions for targeting different architectures. However, NVIDIA doesn’t look set to standardise heterogeneous programming; they expect the hardware to become more homogeneous between CPU and GPU. We’d like to test NNs with DL_MESO to see how well they perform in reproducing coarse-grained simulations. We have also applied for an ECAM2 project to port DL_MESO to C++17 and use Kokkos for performance portability. This will allow us to compare performance with the current CUDA version and understand how well Kokkos can perform.”
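The surrogate-modelling idea Jony describes – training a network on simulation outputs and then querying the network in place of the simulation – can be sketched in a few lines. Everything below is a toy illustration: `simulate` is a hypothetical stand-in for an expensive code such as an MD simulation, and the one-hidden-layer network is written in plain NumPy rather than a real DL framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive simulation code: it maps an
# input parameter to a scalar observable. In the work described above,
# this role is played by an MD simulation producing contact densities.
def simulate(x):
    return np.sin(2.0 * x)

# Training data generated by running the "simulation"
x = rng.uniform(0.0, 2.0, size=(256, 1))
y = simulate(x)

# A one-hidden-layer regression network, trained by full-batch
# gradient descent on mean squared error.
W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05
first_loss = None

for step in range(3000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # surrogate prediction
    err = pred - y
    loss = float(np.mean(err ** 2))
    if first_loss is None:
        first_loss = loss
    # Backpropagate the MSE gradient through both layers
    g_pred = 2.0 * err / len(x)
    g_W2, g_b2 = h.T @ g_pred, g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    g_W1, g_b1 = x.T @ g_h, g_h.sum(axis=0)
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

# Once trained, the surrogate answers queries far faster than
# re-running the simulation would.
x_new = np.array([[0.75]])
surrogate_value = (np.tanh(x_new @ W1 + b1) @ W2 + b2).item()
print("surrogate:", surrogate_value, "simulation:", float(simulate(0.75)))
```

The appeal Jony notes is exactly this pattern at scale: once trained on enough simulation data, the network interpolates (and, more speculatively, extrapolates) at a fraction of the simulation’s cost.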

James Clark and Aiman Shaikh attend talks by Mellanox Technologies at Supercomputing 2019.

High Performance Software Engineer James Clark concluded:

“On Sunday I presented at the Atos Quantum Workshop. This was a showcase of how the Hartree Centre is using our Quantum Learning Machine, such as our joint training and access programme with Atos and our ongoing project work with Rolls-Royce.

I also talked about our future plans to develop quantum software that can take advantage of both quantum computing and HPC.

One of the most interesting developments in HPC this year was how far ARM CPUs have come. Riken and Fujitsu’s Fugaku is one of the major success stories, with the first deployment of the new SVE (Scalable Vector Extension) instructions. Fujitsu announced that Cray will be bringing their ARM CPUs to the rest of the world. NVIDIA also announced that their GPGPUs will be supported on ARM platforms, with a number of ARM CPUs listed as supported on release. I am looking forward to seeing how the increased competition in the hardware space turns out, especially with AMD’s Rome CPUs and Intel’s Xe GPUs. The future of HPC looks to be very interesting and it’s an exciting time to be involved.”

I couldn’t have said it better myself!

Caught in the data loop?

Fresh from the Open Data Institute (ODI) Summit 2019 and bursting with questions, Holly Halford, Science and Business Engagement Manager for the STFC Hartree Centre, explores the use of personal data for online marketing and asks: how do we stop ourselves getting stuck in the data loop?

So, your friend is getting married. You post a few harmless pictures on Instagram, throwing in a few #wedding tags for good measure. The next day, you’re scrolling through your social media feeds and perusing news sites only to find that every sponsored post, every inch of ad space is now trying to sell you wedding dresses. Wedding venues. Wedding fayres. Decorative wedding trees. Things you didn’t even know existed – all useless to you and, presumably, the advertiser – but the ads are still there, taking up precious mindshare.

But you asked for this – you were the one who carelessly hashtagged your way into the echo chamber… right?

From targeted advertising to political persuasion, whether to help or hinder us, our personal data is being used on a daily basis to effect changes in our behaviour: from the extra purchase you didn’t really need to make, to the life milestones you are forced to start thinking about because your data fits a certain demographic.

New research, conducted by the ODI and YouGov and published to coincide with the recent ODI Summit 2019, concluded that nearly 9 in 10 people (87%) feel it is important that organisations they interact with use data about them ethically – but “ethical” means different things in different contexts to different people. In discussion at the conference, Prof. Nigel Shadbolt and Sir Tim Berners-Lee highlighted that research shows people are reasonably accepting of personal data being used for targeted advertising, but less amenable to it being used for political advertising. Tim proposed a possible reason for this, positioning himself as in favour of targeted commercial advertising – at least towards himself – as it generally helps you find the things you want faster, and also helps companies to make the sales that keep them in business. A “win-win” for both consumer and economy, then.

Sir Tim Berners-Lee in conversation with Professor Nigel Shadbolt and Zoe Kleinman at the ODI Summit 2019.

He suggested that political advertising is different in nature because it may make people act in a way that isn’t truly in their own best interest, due to manipulation or misrepresentation of information. It’s possible, of course, to argue that the same can be true of misleading commercial advertising, but the potential impacts there are almost always purely financial – spending money you didn’t need to, getting into debt, and so on – and these ramifications are not significantly different to the pitfalls of marketing via any other route. Traditional print media, billboards and television advertising have all probably promised you a better life at some point, if you just buy that car, that smartphone or that deodorant.

Tim has a point – targeted advertising can be useful and makes some logical sense, especially if we have actively searched for related terms or shown our interest in a certain product or service by interacting with content related to it. Despite how 1984 it can feel sometimes, I’m actually personally much more comfortable with data-driven advertising based around our active behaviours than with the other option – the demographic-based approach, which I feel has the potential to be far more insidious.

There’s a beauty product advert in my Facebook feed. If I click on the “why am I seeing this” feature, I am quickly informed that Company X “is trying to reach females aged 25 to 54”. Whilst the transparency is a welcome change, it doesn’t fill me with hope that a significant proportion of the media thrust upon us each day is tailored based on nothing more than gender or other divisive demographics. I often wonder how many men have beauty product adverts showing up in their feeds compared to, say… cars, watches, sporting equipment? (I unscientifically and anecdotally tested this theory on a colleague recently, a man in a similar age bracket to myself. He reported an unusually high volume of DIY ads.)

Credit: Death To Stock

The data bias is there, entrenched in historic trends that have potentially damaging consequences in the perpetuation of gender stereotypes and more – if your demographic fits the initial (and undoubtedly biased) statistical trend, do we now, via data-driven marketing, perpetuate it for all eternity?

But how do we address the very fundamentals of marketing and communications without perpetuating stereotypes and pushing conformity to social norms? As a marketing and communications professional, I confess that the commonly used concept of developing “personas” to describe your target audience and help articulate your message more clearly to them has never sat well with me, because those personas by nature are based on stereotypes and assumptions. Knowing your audience is an absolutely crucial pillar of marketing, but if you only ever acknowledge an existing or expected audience, how do you access new markets and prevent alienating potential customers outside of that bracket? Not to mention the ethical concerns this approach flags up. We need to take a more creative approach to get messages heard without excluding anyone. It may not be the easiest route but I’m certain that it is possible, more ethical and when executed successfully, more effective.

So, what can we, as consumers, do to prevent trapping ourselves with our own hashtags and search terms? The current options seem fairly lacking. Perhaps we can turn to AI-driven discovery of “things you might enjoy”. Features like this can be found on most common media platforms, with varying degrees of success. But as the algorithms get more accurate, the tighter the loop closes. As Tim suggested, the intention is to be helpful and save us time – if only to provide a good user experience that keeps you invested in using the platform, of course – but everything it suggests will be based on existing tastes and activity. If you’re predisposed to playing Irish folk music, good luck getting Spotify to suggest you might have an undiscovered passion for post-progressive rock.
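That tightening loop is easy to reproduce in miniature. The sketch below, with entirely made-up numbers, models a recommender that always serves whichever genre the profile already ranks highest and bumps its score on every interaction – the other genres never get surfaced again:

```python
# Toy model of a recommendation feedback loop.
# Affinity scores for three genres; "irish_folk" starts slightly ahead.
scores = {"irish_folk": 1.2, "post_prog_rock": 1.0, "jazz": 1.0}

def recommend(scores):
    # Serve whatever the profile already ranks highest.
    return max(scores, key=scores.get)

served = []
for _ in range(20):
    genre = recommend(scores)
    served.append(genre)
    scores[genre] += 0.1   # a click or listen reinforces the profile

print(served.count("irish_folk"), "of 20 recommendations were irish_folk")
```

All twenty recommendations end up being the genre that started fractionally ahead: the profile only ever learns from what it was already shown, which is the loop in a nutshell.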

Credit: Death To Stock

This presents a bigger problem when considering the landscape of opinions, causes and politics. The idea of social media curating our own personal echo chambers and arenas of confirmation bias is not a new one. It’s true that we can subscribe to contrasting interest groups, a tactic some journalists have been using – but how many of us have the patience to subject ourselves to a cacophony of largely irrelevant content, if it’s not a professional requirement? A more pressing question is: if we don’t interact positively (or at all) with that “alternate” content, does another algorithm begin to de-prioritise it until we no longer see it anyway and we’re back where we started?

Is the answer in a change of algorithms, then? The tactic of ignoring trends and demographics seems to be entirely at odds with the notion of creating better, more accurate AI algorithms and data-driven technologies. Whether we like it or not, they are meant to do exactly that – generate accurate predictions based on statistically evidenced trends and demographics. I feel quite strongly that a great deal more creative thought is required to ensure that ethical practices and regulations are instigated in line with the pace of technological advancement, and prevent data-driven marketing from driving us round in circles for the foreseeable future.

Afterword: I wrote the majority of this blog post before the launch of the Contract for the Web recently announced by Sir Tim Berners-Lee. It presents an encouraging and much needed first step towards safeguarding all the opportunities the internet presents and championing fairness, safety and empowerment. Now, let’s act on it.

Better Software, Bigger Impact

Since the term was first coined in 2012, Research Software Engineering has experienced rapid growth, first in the UK and then overseas. Today there are at least 20 RSE groups at universities and research institutes across the UK alone, alongside thousands of self-identifying RSEs, numerous national RSE associations and, since earlier this year, a registered Society of Research Software Engineering to promote the role of RSEs in supporting research.

The core proposition of RSEs is “Better Software, Better Research” – by improving the quality of software developed by researchers, we enable higher quality research. Software quality is a broad topic, but the most common benefits academic RSEs deliver are:

  • improved reliability – fewer software errors leading to incorrect results
  • better performance – enabling more accurate and/or bigger science
  • reproducibility – increasing confidence in scientific results.

Since early 2018 the Hartree Centre has been building up an RSE capability of its own, but for slightly different reasons. Rather than being measured on research output, the Hartree Centre’s mission is to create economic impact through the application of HPC, data analytics and AI. Most often this means taking existing research software and applying it to solve industrial challenges. One of our key challenges is crossing the “valley of death” from a proof of concept – where we demonstrate that a given tool, algorithm or method can in principle be used to solve a problem – to actual industry adoption of that approach. While reliability and performance are still important here, the key issues for a company adopting new software are often usability, portability and security.

In practice, while our RSE team shares many skills with academic RSEs – such as employing best practices for version control, code review and automated testing – we specialise in areas like building simple user interfaces for complex software, automating workflows involving HPC and deploying web applications securely to the cloud, ready for industry use.
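To give a flavour of the automated testing practice mentioned above, here is a minimal sketch – the function and numbers are entirely hypothetical, not Hartree Centre code – of the kind of regression test that guards a piece of research software against silently producing incorrect results:

```python
# Minimal automated-testing sketch (hypothetical example).
# A small simulation helper plus a pytest-style test that pins down
# its expected behaviour, so future refactors can't break it unnoticed.

def mean_kinetic_energy(masses, velocities):
    """Return the mean kinetic energy 0.5 * m * v**2 over all particles."""
    if len(masses) != len(velocities):
        raise ValueError("masses and velocities must have the same length")
    energies = [0.5 * m * v ** 2 for m, v in zip(masses, velocities)]
    return sum(energies) / len(energies)

def test_mean_kinetic_energy():
    # Two particles: 0.5*2*3**2 = 9 and 0.5*4*1**2 = 2, so the mean is 5.5
    assert mean_kinetic_energy([2.0, 4.0], [3.0, 1.0]) == 5.5
```

Run under a test runner such as pytest, a suite of checks like this executes automatically on every change – this is the “fewer software errors leading to incorrect results” benefit in practice.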

Introducing some members of the Hartree Centre RSE team.

Our team has grown to 14 staff, comprising a range of roles from Degree Apprentices and RSEs with specialisms in HPC, AI and data analytics to Full Stack Developers and a Software Architect.

Just like academic RSEs, we’re at our best when working in collaboration, whether that’s with the other technology teams across the Hartree Centre, commercial clients, or our technology partners like IBM Research. 

Some of the projects we’ve been working on recently include:

We’re still recruiting – if you want to be part of the Hartree RSE journey, please apply here. We’d love to hear from you!

*Full disclosure: I’m a founding trustee of the society.

Meeting the Women of Silicon Roundabout – present and future!

Aiman Shaikh, one of our Research Software Engineers, recently attended Women of Silicon Roundabout 2019 – one of the largest gatherings of female technologists in Europe – held at ExCeL London. In this blog post, Aiman tells us more about her motivations for attending the two-day event, which aims to make an impact on the gender gap and boost the careers of attendees.

My main motivation for attending the conference was the opportunity to be among 6,000 attendees who were all like me: eager to connect, learn and take action on gender diversity and inclusion in the workplace. Women of Silicon Roundabout 2019 brought together a programme of inspirational keynotes, panel discussions, networking opportunities, technical classes, and career development workshops – it was the first and only conference I have attended where female technical speakers took centre stage.

For me, the highlight was the chance to hear from inspirational leaders – many of whom were women – about emerging technologies like artificial intelligence (AI), data analytics, blockchain and cloud computing. This, coupled with the strong messages throughout the conference about the importance of diversity and inclusion, was truly incredible.

Over 6,000 delegates attended the two-day event at ExCeL London.
Image credit: Women of Silicon Roundabout.

One of the many worthwhile sessions I attended was from Denise Jones, Senior Product Manager at LetGo. Denise discussed whether AI has given rise to new and distinctive ethical issues, challenging the group with statements like “algorithms can predict user preference based on previous activity and based on other users who are like them” and raising important questions about how we as technologists can be mindful of bias in our work with AI. It made me really consider the balance involved: collecting data to provide a better user experience and product personalisation is a good thing, but collecting too much data and over-targeting audiences can backfire, frustrating users with irrelevant content.
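The idea behind that quote – predicting a user’s preference from the behaviour of similar users – can be sketched in a few lines of user-based collaborative filtering. The data and names below are a made-up toy example, not anything from the talk; real recommender systems are far richer (and this simplicity is exactly where bias can creep in, since predictions only reflect the users already in the data):

```python
# Toy sketch of "predicting preference from other users who are like them"
# (user-based collaborative filtering). All data here is hypothetical.
from math import sqrt

ratings = {  # user -> {item: rating out of 5}
    "alice": {"film_a": 5, "film_b": 1},
    "bob":   {"film_a": 4, "film_b": 2, "film_c": 5},
    "carol": {"film_a": 1, "film_b": 5, "film_c": 2},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
    norm_u = sqrt(sum(ratings[u][i] ** 2 for i in shared))
    norm_v = sqrt(sum(ratings[v][i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    scores = [(similarity(user, other), r[item])
              for other, r in ratings.items()
              if other != user and item in r]
    total = sum(s for s, _ in scores)
    return sum(s * r for s, r in scores) / total if total else None

# Alice hasn't seen film_c; her tastes track bob's much more closely than
# carol's, so the prediction leans towards bob's rating of 5.
print(predict("alice", "film_c"))
```

Notice that the prediction for Alice is driven almost entirely by the user most like her – which is precisely the self-reinforcing dynamic the session asked us to be mindful of.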

I also attended the “Confident Speaking for Women” workshop led by Sarah Palmer, Director of European Business Development at PowerSpeaking. This was an incredibly useful 60 minutes packed full of exercises specifically designed to improve presentation skills. It gave loads of helpful tips for ‘presentation newbies’ like myself, such as the importance of trying things out in advance and how to project confidence and credibility, especially through effective nonverbal language. I’m looking forward to implementing several of these strategies in my own conference talks!

Another real highlight of the conference was the Women of Colour networking lunch on the second day of the event. Organised by Google, it was a chance to ‘inspire and be inspired.’ I was fortunate enough to meet so many role models in tech, find out from them how they progressed in their careers and managed their work/life balance, and grow my own professional networks. I was also lucky to be able to meet groups of fantastic early career women who were keen to find out more about my job and the Hartree Centre. I really enjoyed telling them more about my role and day-to-day life as part of the Research Software Engineering team – I hope to see some of them apply for our job vacancies, as they would be great assets to any team!

Aiman Shaikh | Research Software Engineer | Hartree Centre
Image credit: STFC

I loved this conference – it provided a much-needed platform for women in technology, inspiring attendees to talk and network with women working across different industries and using a variety of emerging technologies in their day-to-day jobs. I’ll certainly be taking many of the lessons learned back to the Hartree Centre – it has inspired me to think about AI and data analytics in some of my upcoming projects, and about how I can continue to incorporate diversity and inclusion into my work and professional networks.

From a Computing GCSE to being Deputy Director

“Life is like a large pond, you are surrounded by lilypads and depending on your capabilities and circumstances you have to pick the next one to step onto.”

When I was younger, growing up in Wigan, I was mainly interested in three things: football, computers and radio-controlled cars. At school, I decided to study A Levels in maths, physics and chemistry and then went off to study chemistry at the University of Leeds with no fixed idea of what I wanted to do or where I was going afterwards.

After a period of unemployment, I was lucky enough to get a job as a Research Chemist with Crosfield, a Unilever company at the time. This involved working with Crosfield silica to remove protein from beer, essentially increasing the shelf-life of the product. To me, this was great – I was a beer scientist at the age of 21! I enjoyed the challenge of working on new formulations and eventually discovered a way of improving the shelf-life of beer using 50-70% less material than previous methods. At first, the brewers we worked with did not seem to buy into the idea, so the sales staff invited me out with them to explain the process to our customers. That was my first taste of sales and I really enjoyed it, so I started to try to go out with the sales team as much as I could.

My next ‘career leap’ was into telesales, and this turned out to be a terrible idea as it really did not suit the way I liked to work and how I liked to develop customer relationships and insight. From there, I went to work for Dionex in a regional sales role with a remit for selling chromatography columns, which separate chemical components. It was this position that helped me to recognise that I was actually quite good at sales, and it taught me an important point:

“people do not just buy kit, they buy answers to the problems they want to solve.”

This led me back to my interest in computing where I taught myself how to use a macro-based scripting process that increased the efficiency of the sales process, helping me to match solutions to customer problems.


Bringing big data to life | TechUK’s Big Data in Action Roadshow comes to Manchester


Last week, the Hartree Centre sponsored TechUK’s Big Data in Action Roadshow in Manchester, held as part of a series of events across the UK to demonstrate the tools and technologies available for businesses to use, explore and get value from their data. Read on to find out how the day unfolded.

Through the gears: boosting car industry competitiveness

 

The visualisation facilities at the Hartree Centre have been used to help car manufacturers cut time and cost from their innovation processes.

Now that the summer break is pretty much over (what was that I hear some of you shout?), I thought it was time for us to publish another post on here. In this post I touch a little on the automotive industry.

The automotive industry is one of those sectors that countries tend to use as a barometer of their overall industrial and economic performance. In the UK, the sector enjoyed a pretty buoyant 2015, all things considered.

Creating a cognitive eco-system – day one of the Hartree Hack


This week sees the Hartree Centre run its first hackathon event at Daresbury – a three-day event bringing together developers, designers and companies from a range of sectors, all with the aim of creating the next big thing in web or mobile-based applications using IBM Watson APIs.

Duncan Sime introducing the Hartree Centre at the Hartree Hack

I’m a football fan. A Manchester United supporter for 40-odd years. The football was so poor last night (although we did win 3-1, I prefer a match that lifts you out of your seat) that I ended up having the “what are you doing tomorrow?” conversation with my significant other midway through the first half of the match.