Building a Climate Resilience Demonstrator

How can digital twins help us respond to climate change? We asked Hartree Centre Data Engineering Specialist and Technical Architect on the Climate Resilience Demonstrator project, Tom Collingwood, to tell us more. 

With all the recent storms causing damage and flooding across the UK, it couldn’t be a more relevant time to talk about climate change!

To start us off, can you explain to us what climate resilience means?

It is very topical right now with all the storms we've had recently, and it seems to be a developing issue that is worth looking at. Most people will be familiar with the idea of climate change causing all kinds of disruption – from flooding to droughts to storm damage. On the extreme end of the scale, these events can pose a threat to our safety, whether directly or indirectly, by disrupting an essential service or system – for example, the power going out in a hospital, or emergency services losing signal on their way to an accident.

This kind of work is distinct from trying to slow or stop climate change – which is incredibly important too. Instead, we're trying to inform understanding of what might happen to our infrastructure if and when more severe climate events occur, and hence how we might prioritise resilience planning around the assets that are crucial to whole-of-system resilience.

This means there is huge potential in terms of damage prevention, cost savings and service reliability for immediate services like telecoms, energy, water and utilities – and the benefits cascade down to any industry that relies heavily on those services or would be affected by disruption to them. Which is pretty much all industries!

So that’s where the Climate Resilience Demonstrator (CReDo) comes in?

The Climate Resilience Demonstrator, CReDo, is a digital twin demonstrator project to improve climate and extreme weather resilience across infrastructure networks – the first of its kind in the UK. We narrowed down our focus to look specifically at the effects of extreme flooding on the communications, power and water networks in a specific area of the UK. We have developed a prototype digital twin that takes in data from climate, water, utilities, telecoms and energy industries and applies flood impact models which predict where flooding will form in the UK, how those floods might affect the equipment they touch and how knock-on impacts spread out to the rest of the networks outside the immediate flood zones.

We wanted to demonstrate how those who own and operate infrastructure can use secure, resilient information sharing across sector boundaries to mitigate the effect of flooding on network performance and service delivery to customers. So we've been developing reliable approaches and frameworks for secure data sharing and information management that can inform this kind of model and be scaled up.

Why is a digital twin useful when tackling the challenge of climate resilience?

Lots of niche areas of utilities and telecoms will have specific experts or teams who have been responsible for the same machine or equipment for the last 50 years, and if you lose that person or team, all that operational knowledge goes with them. If you start using digital twins and connected data, you need to get that specialist information out of their heads and turn it into models that can run in AI and machine learning systems, providing 24-hour access to that information.

Many people think of digital twins as operational tools streaming live data from sensors and adjusting ongoing processes accordingly, such as in a manufacturing facility. With climate change, the feedback loop we are looking at might take 100 years to complete, so in this specific use case our digital twin isn't streaming live data. Instead, it provides resilience planners with predicted outcomes for a given set of inputs, so they can use that information when making decisions about the future networks they're supporting.

We're only scratching the surface of what digital twins can do for climate resilience with this specific use case. Bringing operational sensor data into the mix (from river levels to real-time asset monitoring) would broaden the application out to explore current and potentially upcoming failures via predictive maintenance modelling, or to branch out into other climatic effects such as wind and extreme heat, informing how we make the whole network more resilient to a variety of new challenges over the coming decades. You're building the foundations for a digital decision-support (and potentially future decision-making) assistant that always gives consistent advice to actively support the experts making vitally important decisions about our country's infrastructure.

“That’s the thing – if you get this kind of work right, basically no one will ever hear about it. Life goes on as normal, the power stays on, the communications don’t go down and the damage is minimal.”

Tom Collingwood – CReDo Technical Architect

How do you teach a computer to do that?

You have a structured conversation with the experts: you ask them to tell you how things might break – even in strange or temperamental ways you wouldn't expect – and you incorporate all those cases to develop a model that provides more accurate predictions. The more you know, the more data you have to keep running through the system to refine it and make better and better predictions in future.

Can you talk us through an example to illustrate what kind of scenarios you’re modelling?

One of the examples we looked at was a water pumping station. We had to factor in variables like what will break if the water reaches a specific depth, because that would submerge the electronics and potentially start a fire. Or whether the fuel has been stolen from a backup generator, which would mean everything switches off in an emergency – and imagine cases where there are no sensors detecting whether the fuel is still there.

Our approach means that in the short term we look at the statistical probability and frequency of those factors to make more accurate predictions of when and how failures might occur. In the long term, having discovered what data would be useful, you can put the technologies in place to collect it – in this case, you'd install fuel level sensors in the tanks.
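To make that concrete, here is a minimal Python sketch of the kind of asset failure model described above: a hard threshold rule elicited from experts, plus a prior probability standing in for the missing fuel sensor. The function name, thresholds and probabilities are all illustrative assumptions, not values from the CReDo models.

```python
from typing import Optional

def pump_station_failure_probability(flood_depth_m: float,
                                     fuel_present: Optional[bool],
                                     p_fuel_missing_prior: float = 0.05) -> float:
    """Estimate the probability that a pumping station fails in a flood.

    fuel_present is True/False if a fuel sensor exists, or None when no
    sensor is installed and we must fall back to a prior (a "sensible
    approximation" in the sense used above).
    """
    ELECTRONICS_FLOOD_DEPTH_M = 0.5  # assumed depth that submerges the electronics

    # Expert rule: past this depth the electronics are submerged
    # and the station fails outright.
    if flood_depth_m >= ELECTRONICS_FLOOD_DEPTH_M:
        return 1.0

    # Below that depth, the station only fails if mains power trips
    # AND the backup generator can't start because its fuel is gone.
    P_MAINS_TRIP = 0.3  # assumed chance of a mains trip in a near-miss flood

    if fuel_present is None:
        p_fuel_missing = p_fuel_missing_prior  # no sensor: use historical rate
    else:
        p_fuel_missing = 0.0 if fuel_present else 1.0

    return P_MAINS_TRIP * p_fuel_missing


# Moderate flood, no fuel sensor installed yet:
print(pump_station_failure_probability(0.3, None))  # 0.3 * 0.05 = 0.015
```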

So the process goes something like this:

  • Learn from experts what variables affect potential failures or faults
  • Make a plan for which data you need to start collecting
  • Create a model that uses that data to make predictions, and provides sensible approximations where the data aren’t readily available to the system yet
  • Keep feeding in new data to refine the models over time (see the sketch after this list)
  • Review the outputs of the models with the experts running the machines/assets, and tweak as necessary to ensure the models give sensible outputs using the current information at hand
  • Use the predicted outputs to inform plans to mitigate the failures
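As a toy illustration of the "keep feeding in new data" step, here is a minimal sketch of refining one model parameter as observations arrive – a conjugate Beta update of the assumed fuel-theft rate. The prior and the inspection data are made up for illustration.

```python
# A Beta prior over the fuel-theft rate is updated after each
# inspection; the estimate sharpens as data accumulates.

def update_theft_rate(alpha: float, beta: float, fuel_was_missing: bool):
    """Conjugate Beta-Bernoulli update for a single inspection."""
    return (alpha + 1, beta) if fuel_was_missing else (alpha, beta + 1)

# Weak prior corresponding to a roughly 5% theft rate.
alpha, beta = 1.0, 19.0

# A year of monthly inspections: fuel found missing once (hypothetical).
for fuel_was_missing in [False] * 11 + [True]:
    alpha, beta = update_theft_rate(alpha, beta, fuel_was_missing)

print(f"Refined theft-rate estimate: {alpha / (alpha + beta):.3f}")  # 0.062
```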

So with the flooding example, you can’t stop the weather but you can predict when it’s likely to happen and put up defences in time to minimise damage or disruption?

Exactly. And the next stage is to look at what knock-on effects happen when a fault or failure occurs – so if a power plant goes down, everything it supplies has now lost its primary power supply. What would that mean for vital infrastructure, like healthcare? This was what the short film we funded through the project was exploring – that something like loss of power, even over a short period, can actually be life or death.

The ultimate impact of a single asset going down isn’t something which is immediately apparent – we have to cascade those failures across multiple networks throughout the system if we want to understand the real impact, and with complex network interdependencies that’s not an easy thing for humans to resolve quickly, whereas the right computational models can be very well suited to doing this quickly and repeatably.
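As a sketch of that cascading logic, consider a toy dependency graph and a breadth-first traversal that finds everything a single flooded asset ultimately takes down. The asset names and links here are invented for illustration, not CReDo data.

```python
from collections import deque

# asset -> assets that draw power/water/comms from it (illustrative)
supplies = {
    "substation_A": ["pumping_station_1", "telecom_mast_2"],
    "pumping_station_1": ["hospital_water"],
    "telecom_mast_2": ["emergency_comms"],
    "hospital_water": [],
    "emergency_comms": [],
}

def cascade(initial_failure: str) -> set:
    """Return every asset knocked out by one initial failure."""
    failed = {initial_failure}
    queue = deque([initial_failure])
    while queue:
        asset = queue.popleft()
        for dependant in supplies.get(asset, []):
            if dependant not in failed:
                failed.add(dependant)
                queue.append(dependant)
    return failed

print(cascade("substation_A"))
# {'substation_A', 'pumping_station_1', 'telecom_mast_2',
#  'hospital_water', 'emergency_comms'}
```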

Short film “Tomorrow Today” was produced by the National Digital Twin programme and Climate Resilience Demonstrator to explore the potential impact of digital twins.

What was the Hartree Centre’s role in the project?

The Hartree Centre was brought into the consortium originally to provide leadership of technical delivery, and I was given the role of Technical Architect accordingly. This meant my job was to oversee the successful delivery of a technical plan, so I had to do a bit of planning first and then ensure we could make it happen. We also had several other members of our Data Science and Research Software Engineering teams working on different aspects of data analysis and code optimisation for the project.

I’ve had oversight of what’s being done across the consortium of project partners: STFC’s Hartree Centre and DAFNI, CMCL Innovations, the Joint Centre for Excellence in Environmental Intelligence (JCEEI), the National Digital Twin Hub and the Universities of Edinburgh, Warwick and Newcastle.

On the industry side, Anglian Water, BT and UK Power Networks provided infrastructure data and Mott MacDonald supported us with domain expertise in infrastructure and flood modelling.

That’s lots of pieces to bring together!

Yeah, it’s a massive and quite complicated stakeholder map with a lot of moving pieces! So I’ve spent a lot of the last year joining the dots and doing agile programme planning. We’ve approached it with telecoms, water and utilities providers as the “customers” we had in mind as they’re the ones who would ultimately be able to benefit from the outputs of the project and use them to increase reliability and functionality.

A bunch of very talented people were put in front of me and I had to figure out how we could deliver as much as possible simultaneously and get it all done in time for the close of the project. We set up a secure cluster on DAFNI to put data from the asset owners all in one place, so that the scientists working on the project could access it and connect it up to develop models, without it being shared or accessed by anyone else.

What are the next steps?

The project comes to a close in March 2022, so we’re currently writing up the reports and planning a webinar to present our experiences, talk about technical achievements and lessons we’ve learned along the way so that hopefully others can learn from them too and continue to develop our ideas.

The project partners are going to collate reports and write executive summaries so we have something to help us engage with business leadership audiences that are less technical but have the decision-making authority to try implementing these concepts at scale. The technical reports go into more detail so that technical staff can understand what needs to be done.

We’re also going to continue working with the partners on this project to seek funding for the continuation of development, and hopefully further scaling up of this project. Watch this space!

Find out more about the Climate Resilience Demonstrator.

Missed the show-and-tell webinar? Watch it now

Read the technical reports

What our customers think | Managing our Impact

Hear from our Head of Impact Management Karen Lee and find out about her role at the Hartree Centre and her highlights from our recent commercial outcomes survey.

Karen Lee, Head of Impact Management

Anyone working in the research and innovation ecosystem will be very familiar with the concept of ‘impact’ and generating benefits for the UK economy and its people.

As an applied research centre focused on supporting industry in their adoption of transformational digital technologies such as AI and data analytics, this one word, impact, summarises what the Hartree Centre is all about. My role is to help us show it.

The recent AI Activity in UK Businesses report estimates that companies’ annual expenditure on AI technologies was over £62 billion in 2020. With the right conditions in place (for example by reducing the barriers to adoption… which is what we do), it predicts that total AI expenditure could grow 11-17% annually over the next five years. Meanwhile, the Impact of AI on Jobs report estimates that such technologies could boost our economy by as much as 10% of GDP by 2030. So we are talking pretty significant numbers here!

It’s exciting that the Hartree Centre (and therefore my talented colleagues) is one of the delivery mechanisms to achieve this. As its Head of Impact Management, my focus is on demonstrating the benefits of our portfolio of projects and programmes by helping us to measure and understand the value we contribute to the organisations we collaborate with, as well as to the wider economy and society.

One way this is done is by capturing information – via surveys and interviews – on customers’ experiences of working with us and tracking this over time. We’ve just published an independent report which has provided useful insight into our portfolio of commercial projects.


The key findings from the report are based on responses from 31% of past user organisations that engaged in commercial projects completed up to the end of January 2021.

The Commercial Beneficiary Outcomes report is interesting to me on a number of levels. First off, I want to give a shout-out to all of the people who make the Hartree Centre ‘work’, and I am proud that 94% of respondents stated they already had, or would, recommend us to others. The work my colleagues do is remarkable – it is right on the cutting edge and more often than not beyond my realm of understanding.

Back to the organisations… even if you just look at the sectors they operate in, it really does show that digital technologies pretty much touch every part of the economy.

And these businesses are not all at the same stage of their digital transformation journey either. Some are just starting out, with an idea in mind but unsure of whether it would work or where it might take their business, whilst others are optimising processes or product development. I think this is demonstrated in the range of outcomes reported, which indicate important improvements in productivity, performance, skills and R&D investment. For example:

  • 76% reported that the strategic importance of adopting and applying digital technologies had increased
  • 65% have seen an increased investment in R&D within their organisation
  • 84% have improved the extent to which their organisation uses or exploits data
  • 89% have increased their in-house technical expertise and capabilities

Other benefits reported included:

  • Enhanced confidence in products and services
  • Improved effectiveness of product development
  • Optimised processes
  • Reduced product development costs
  • Reduced time to market
  • Increased sales or profitability
  • Enhanced reputation

Although after a project you’ll get a picture of the early outcomes and future potential, the nature of innovation is such that full benefits often take time to be realised. Our survey reflects this in that 79% of participants said that they expect to see further commercial benefits over the next 1-3 years.

Following this research, as part of the Hartree Centre’s continuous improvement work, we’ve enhanced our evidence collection so that we capture information before, after and 2-3 years after our projects to track how benefits accrue to achieve impact over time.

If you’d like to find out more, take a look at the public summary of the report or our infographic. We also have some fantastic case studies which tell a powerful story of some of our individual client projects.


Meet the team | Training and Events Manager

We spoke to Nia Alexandrova about her role at the STFC Hartree Centre, what keeps her coming into work every day and how the shift towards remote-working has changed the way events are run.

Can you introduce yourself and tell us about your role at the Hartree Centre?  
I am the Training and Events Manager, so my responsibilities involve building and designing the Hartree Centre’s training strategy and programme. This means working with the researchers in their area of applied research to design specific courses and learning materials for different audiences and entry levels. It can also involve managing the process of organising and delivering an event. Because of my background and own research, I am able to support people in finding better ways to teach depending on different audiences. Specifically, my research is in collaborative training and collaborative learning in technology-rich environments. 

So your background is in research? 
My education was in engineering but that was a very long time ago! I started as a research assistant and was involved with some programmes that were being overhauled and transitioned from Liverpool University to the University of Reading. I then went to work at the Barcelona Supercomputing Centre to help define and develop their training programme. We built up a team, starting with myself and later Maria-Ribera Sancho (former Dean of FIB – Barcelona School of Informatics), to create a coordinated approach from the ground up. When I came back to the UK, I joined the STFC Talent Pool and this opportunity in training and events came up which was well suited to my skills! I also knew Alison Kennedy (who had just become the Director of the Hartree Centre at the time) through Women in HPC, and she confirmed to me that this was probably a place where I would want to work!

Nia Alexandrova presenting in front of a large curved screen showing an image of a supercomputer.

So she was right then! What keeps you coming to work every day?  
I hate doing the same thing over and over again, and working at the Hartree Centre is very exciting and quite challenging in that way – every day is different! For example, with HNCDI’s EXPLAIN training programme, we are working on the challenge of enticing people to engage with training they may never have thought they needed. It’s easy to present supercomputing and AI to an academic audience but more difficult to engage with individuals or private companies and their leadership, who may not be aware of the benefits of upskilling their teams in digital transformation, computing or AI. We are at an interesting moment: people are becoming aware of the need for digital transformation.

For the Hartree Centre too, the entire time I’ve been working here we have been growing and evolving. We are finding different ways to develop things, finding the best way to support people and exploring how to teach in the best possible way. The challenge is ensuring that when you’re training individuals, you’re giving them the skills they need not just in their own job, but to go and change behaviours and attitudes to digital transformation in their own company. 

What would you say has been your biggest challenge recently? 
Until the COVID-19 pandemic, all our training had been hands-on and face-to-face in the physical Hartree Centre building. So the recent – and very sudden – transition to virtual events was initially very disruptive for us and a time for fast problem-solving!

However now we can recognise that it was an inevitable step forward that was just accelerated by the global circumstances, and the Hartree Centre Training, Events and Communications teams worked together really well to find a way to support everyone digitally in a short timeframe. It became an ultimately positive experience that enabled us to enhance our training offering and we are continuing to explore the use of hybrid, virtual and face-to-face events and refine our approach. 

Nia Alexandrova pointing at a computer screen.

So as an events manager, what kind of events do you like to attend? 
Big international conferences like Supercomputing or ISC are always interesting. When you are physically attending these exhibitions, they feel enormous – we are talking about thousands and thousands of people – and that is an exciting atmosphere. I can share best practice with the global HPC training community, be part of meetings and get involved with communities that I wouldn’t encounter locally. It helps you to see the bigger picture and also gain some exposure for your organisation. I have seen some really interesting keynotes, and in recent years there has been a trend of inviting not only people from the high performance computing (HPC) industry but also people who are slightly outside of it. That is a really interesting way to see how work in industry intersects with HPC outside of the HPC research community.

When you’re not at work, what do you most enjoy doing?  
I love drawing, so I go to a life drawing group every Monday. I don’t like to sit and watch TV – I have to have my hands busy – so I knit and crochet a bit. I love the Daresbury Laboratory book club, it’s a lot of fun, and I’m glad we continued it on Zoom during the pandemic. I’m grateful to live near Daresbury because it is a very beautiful area. I always knew this, but during lockdown I started to appreciate it even more, because you can take 5 or 10 minute walks very close to home and see ducks, flowers and woodlands. All the time I’m thinking that if I lived in a big city I would miss this.

You can catch up with Nia’s work by exploring our upcoming training events on the Hartree Centre website, register your interest for fully booked events or sign up for future updates by subscribing to the Hartree Centre newsletter.

Caught in the data loop?

Fresh from the Open Data Institute (ODI) Summit 2019 and bursting with questions, Holly Halford, Science and Business Engagement Manager for the STFC Hartree Centre, explores the use of personal data for online marketing and asks: how do we stop ourselves getting stuck in the data loop?

So, your friend is getting married. You post a few harmless pictures on Instagram, throwing in a few #wedding tags for good measure. The next day, you’re scrolling through your social media feeds and perusing news sites only to find that every sponsored post, every inch of ad space is now trying to sell you wedding dresses. Wedding venues. Wedding fayres. Decorative wedding trees. Things you didn’t even know existed – all useless to you and, presumably, the advertiser – but the ads are still there, taking up precious mindshare.

But you asked for this – you were the one who carelessly hashtagged your way into the echo chamber… right?

From targeted advertising to political persuasion, whether to help or hinder us, our personal data is being used on a daily basis to effect changes in our behaviour. From the extra purchase you didn’t really need to make, to the life milestones you are forced to start thinking about because your data fits a certain demographic.

New research, conducted by the ODI and YouGov and published to coincide with the recent ODI Summit 2019, concluded that nearly 9 in 10 people (87%) feel it is important that organisations they interact with use data about them ethically – but ethical means different things in different contexts to different people. In discussion at the conference, Prof. Nigel Shadbolt and Sir Tim Berners-Lee highlighted that research shows people are reasonably accepting of personal data being used for targeted advertising, but less amenable to it being used for political advertising. Tim proposed a possible reason for this, positioning himself as in favour of targeted commercial advertising – at least towards himself – as it generally helps you find the things you want faster, and also helps companies to make the sales that keep them in business. A “win-win” for both consumer and economy, then.

Sir Tim Berners-Lee in conversation with Professor Nigel Shadbolt and Zoe Kleinman at the ODI Summit 2019.

He suggested that political advertising is different in nature because it may make people act in a way that isn’t truly in their own personal best interest, due to a manipulation or misrepresentation of information. It is, of course, possible to argue that the same can be true of misleading commercial advertising, but the potential impacts are almost always limited to being purely financial – spending money you didn’t need to, getting into debt etc. – and these ramifications are not significantly different to the pitfalls of marketing via any other route. Traditional print media, billboards or television advertising have all probably promised you a better life at some point, if you just buy that car, that smartphone or that deodorant.

Tim has a point – targeted advertising can be useful and makes some logical sense, especially if we have actively searched for related terms or shown our interest in a certain product or service by interacting with content related to it. Despite how 1984 it can feel sometimes, I’m personally much more comfortable with data-driven advertising based around our active behaviours than with the other option – the demographic-based approach, which I feel has the potential to be far more insidious.

There’s a beauty product advert in my Facebook feed. If I click on the “why am I seeing this” feature, I am quickly informed that Company X “is trying to reach females aged 25 to 54”. Whilst the transparency is a welcome change, it doesn’t fill me with hope that a significant proportion of the media thrust upon us each day is tailored based on nothing more than gender or other divisive demographics. I often wonder how many men have beauty product adverts showing up in their feeds compared to, say… cars, watches, sporting equipment? (I unscientifically and anecdotally tested this theory on a colleague recently, a man in a similar age bracket to myself. He reported an unusually high volume of DIY ads.)

Credit: Death To Stock

The data bias is there, entrenched in historic trends that have potentially damaging consequences in the perpetuation of gender stereotypes and more – if your demographic fits the initial (and undoubtedly biased) statistical trend, do we now, via data-driven marketing, perpetuate it for all eternity?

But how do we address the very fundamentals of marketing and communications without perpetuating stereotypes and pushing conformity to social norms? As a marketing and communications professional, I confess that the commonly used concept of developing “personas” to describe your target audience and help articulate your message more clearly to them has never sat well with me, because those personas by nature are based on stereotypes and assumptions. Knowing your audience is an absolutely crucial pillar of marketing, but if you only ever acknowledge an existing or expected audience, how do you access new markets and prevent alienating potential customers outside of that bracket? Not to mention the ethical concerns this approach flags up. We need to take a more creative approach to get messages heard without excluding anyone. It may not be the easiest route but I’m certain that it is possible, more ethical and when executed successfully, more effective.

So, what can we, as consumers, do to prevent trapping ourselves with our own hashtags and search terms? The current options seem fairly lacking. Perhaps we can turn to AI-driven discovery of “things you might enjoy”. Features like this can be found on most common media platforms, with varying degrees of success. But as the algorithms get more accurate, the tighter the loop closes. As Tim suggested, the intention is to be helpful and save us time – if only to provide a good user experience that keeps you invested in using the platform, of course – but everything it suggests will be based on existing tastes and activity. If you’re predisposed to playing Irish folk music, good luck getting Spotify to suggest you might have an undiscovered passion for post-progressive rock.
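To see why the loop tightens, here is a toy content-based recommender in Python: it scores tracks purely by similarity to your listening history, so anything unlike that history can never reach the top. The track names and feature vectors are invented for illustration, not any real platform's algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Feature vectors: (acousticness, tempo, distortion) – all invented.
catalogue = {
    "irish_folk_ballad": (0.90, 0.30, 0.00),
    "another_folk_tune": (0.85, 0.35, 0.05),
    "post_progressive_rock": (0.20, 0.70, 0.90),
}

# Listening history so far: folk, and only folk.
history = [(0.90, 0.30, 0.00), (0.88, 0.32, 0.02)]
profile = tuple(sum(t[i] for t in history) / len(history) for i in range(3))

ranked = sorted(catalogue, key=lambda name: cosine(profile, catalogue[name]),
                reverse=True)
print(ranked)
# ['irish_folk_ballad', 'another_folk_tune', 'post_progressive_rock']
# The rock track always ranks last, so you never interact with it
# and the recommendations narrow further.
```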

Credit: Death To Stock

This presents a bigger problem when considering the landscape of opinions, causes and politics. The idea of social media curating our own personal echo chambers and arenas of confirmation bias is not a new one. It’s true that we can subscribe to contrasting interest groups, a tactic some journalists have been using – but how many of us have the patience to subject ourselves to a cacophony of largely irrelevant content, if it’s not a professional requirement? A more pressing question is: if we don’t interact positively (or at all) with that “alternate” content, does another algorithm begin to de-prioritise it until we no longer see it anyway and we’re back where we started?

Is the answer in a change of algorithms, then? The tactic of ignoring trends and demographics seems to be entirely at odds with the notion of creating better, more accurate AI algorithms and data-driven technologies. Whether we like it or not, they are meant to do exactly that – generate accurate predictions based on statistically evidenced trends and demographics. I feel quite strongly that a great deal more creative thought is required to ensure that ethical practices and regulations are instigated in line with the pace of technological advancement, and prevent data-driven marketing from driving us round in circles for the foreseeable future.

Afterword: I wrote the majority of this blog post before the launch of the Contract for the Web recently announced by Sir Tim Berners-Lee. It presents an encouraging and much needed first step towards safeguarding all the opportunities the internet presents and championing fairness, safety and empowerment. Now, let’s act on it.

Better Software, Bigger Impact

Since the term was first coined in 2012, Research Software Engineering has experienced rapid growth, first in the UK and then overseas. Today there are at least 20 RSE groups at universities and research institutes across the UK alone, alongside thousands of self-identifying RSEs, numerous national RSE associations and, since earlier this year, a registered Society of Research Software Engineering* to promote the role of RSEs in supporting research.

The core proposition of RSEs is “Better Software, Better Research” – by improving the quality of software developed by researchers, we enable higher quality research. Software quality is a broad topic, but the most common benefits academic RSEs deliver are:

  • improved reliability – fewer software errors leading to incorrect results
  • better performance – enabling more accurate and/or bigger science
  • reproducibility – increasing confidence in scientific results

Since early 2018 the Hartree Centre has been building up an RSE capability of its own, but for slightly different reasons. Rather than being measured on research output, the Hartree Centre’s mission is to create economic impact through the application of HPC, data analytics and AI. Most often this means taking existing research software and applying it to solve industrial challenges. One of the key challenges we face is crossing the “valley of death” from a proof of concept, where we demonstrate that a given tool, algorithm or method can in principle be used to solve a problem, to actual industry adoption of that approach. While reliability and performance are still important here, often the key issues for a company adopting new software are usability, portability and security.

In practice, while our RSE team shares many skills in common with academic RSEs – such as employing best practices for use of version control, code review and automated testing – we specialise in areas like building simple User Interfaces for complex software, automating workflows involving HPC and deploying web applications securely to the cloud ready for industry use. 
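As a small illustration of the automated-testing practice mentioned above, here is a minimal pytest sketch; the function and values are hypothetical, not Hartree Centre code.

```python
# research_code.py – a small piece of research software under test.
def celsius_to_kelvin(temp_c: float) -> float:
    """Convert Celsius to Kelvin, rejecting unphysical inputs."""
    if temp_c < -273.15:
        raise ValueError("temperature below absolute zero")
    return temp_c + 273.15


# test_research_code.py – run with `pytest`. A failing test stops a
# bad change reaching users: the "fewer software errors leading to
# incorrect results" benefit in practice.
import pytest

def test_freezing_point():
    assert celsius_to_kelvin(0.0) == pytest.approx(273.15)

def test_rejects_unphysical_input():
    with pytest.raises(ValueError):
        celsius_to_kelvin(-300.0)
```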

Introducing some members of the Hartree Centre RSE team.

Our team has grown to 14 staff, comprising a range of roles, from Degree Apprentices and RSEs with specialisms in HPC, AI and data analytics to Full Stack Developers and a Software Architect.

Just like academic RSEs, we’re at our best when working in collaboration, whether that’s with the other technology teams across the Hartree Centre, commercial clients, or our technology partners like IBM Research. 

Some of the projects we’ve been working on recently include:

We’re still recruiting – if you want to be part of the Hartree RSE journey please apply here, we’d love to hear from you!

*Full disclosure: I’m a founding trustee of the society.

International Women’s Day 2018 | Janet Lane-Claypon

To mark International Women’s Day, Hartree Centre Data Scientist Simon Goodchild writes a blog post celebrating the work of pioneering epidemiologist and doctor Janet Lane-Claypon. At the time of writing, Simon was studying medical statistics for the first time as part of a statistical society diploma and was surprised not to have previously heard of a woman who had invented two of the key techniques he was learning about!

Janet Lane-Claypon

How do you know that your treatment actually works?

How do you know whether something in the environment may impact upon your health?

These are some of the most basic and most important questions in medicine and epidemiology. Getting good answers is vital, and nowadays there are established procedures for finding sensible answers. Several of these can be traced back to the under-recognised work of Janet Lane-Claypon in the early part of the 20th century.
