Specialist research areas at the Risk Institute

The Risk Institute's research spans a huge range of disciplines and expertise. See the (still expanding) list of research areas covered by the Institute and, for each area, explore snapshots of projects, people, events and more.

Research Areas

The Risk Institute has a diverse range of interests in the area of energy, in particular focusing on the future of global energy supply. Our strength lies in our diversity and broad network of collaboration with high-profile organisations in the space.

The Risk Institute has particular expertise in the domain of data. Our work is highly data-centric, and data permeates all our research interests. Our team of data scientists works in both fundamental research and application.

Three key themes govern the Risk Institute's interest in medicine: personalised medicine, enabling co-decision and informed choice, and data-driven medicine.

The Smarter Mobility Network is working on a mobility decision support system developed and maintained by open-source co-creation (like Wikipedia) to advise both regional planners and individual travellers.

Research Capabilities

Reliability analysis

Risk assessment

Policy research and development

Life cycle analysis

Human factors research

Uncertainty recognition

Probabilistic safety assessment

Verification and validation

Decision theory

Optimal design

Risk communication

Probability bounds analysis

Imprecise statistics

Process safety

Machine learning

Numerical simulation

Humane algorithms and ethics

Population viability analysis

Sensitivity analysis

Robust optimisation

Blockchain

Systems analysis

System identification

Statistical modelling

Bayesian methods

Gaussian process modelling

Linguistics and sentiment analysis

Energy

The Risk Institute has a diverse range of interests in the area of energy, in particular focusing on the future of global energy supply. Our strength lies in our diversity and our broad network of collaboration with high-profile organisations in the space, and we are always looking to collaborate with research partners, SMEs and industrial partners. Our expertise in data science, artificial intelligence, and risk and uncertainty quantification, together with our active contribution to the international research community in these areas, allows us to facilitate industrially relevant research, accelerate the development of cutting-edge technology and provide specialist consultancy and advice services.

Renewable Energy

The Risk Institute also boasts a range of research in the field of renewable energy, with experience in wind, solar and hydro-electric power generation systems. This work includes assessing the impact of global climate change on in situ systems and the robust design of future installations. In addition, the Digital-Twin project aims to create a robustly validated virtual prediction tool delivering the transformative new science required to generate digital twin technology; the project covers a number of application areas, including renewable power generation.

Conventional Nuclear

The Risk Institute has built a UK centre of excellence in nuclear fission research, with a diverse portfolio of expertise in this area. Much of our research focuses on the deployment of modern technology in the design, monitoring, safety assessment, resilience and decommissioning of fission devices. We are working to exploit artificial intelligence for the smart online monitoring of fission reactors, to develop an integrated nuclear digital environment for the design of future fission reactors, and to assess the resilience and new vulnerabilities of conventional nuclear plants following the introduction of modern computer control and autonomous systems, alongside human factors research and plant systems safety analysis. The Risk Institute is active in the continuing development of Probabilistic Safety Assessment (PSA) methods, in particular developing approaches for the inclusion of epistemic uncertainties in the PSA framework.
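
To illustrate the idea, the sketch below propagates interval-valued (epistemic) failure probabilities through a toy two-gate fault tree, in the spirit of probability bounds analysis. The gate structure, event names and bounds are invented for illustration and are not drawn from any real plant PSA.

    # Toy fault tree with interval-valued (epistemic) failure probabilities.
    # All names and numbers below are illustrative assumptions.

    def and_gate(a, b):
        """Interval probability that two independent events both occur."""
        return (a[0] * b[0], a[1] * b[1])

    def or_gate(a, b):
        """Interval probability that at least one of two independent events occurs."""
        return (1 - (1 - a[0]) * (1 - b[0]), 1 - (1 - a[1]) * (1 - b[1]))

    # Interval failure probabilities (lower, upper) for the basic events.
    pump_fails   = (1e-4, 5e-4)
    valve_sticks = (2e-5, 1e-4)
    backup_fails = (1e-3, 4e-3)

    # Top event: (pump fails OR valve sticks) AND the backup fails.
    low, high = and_gate(or_gate(pump_fails, valve_sticks), backup_fails)
    print(f"top event probability lies in [{low:.2e}, {high:.2e}]")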

Fusion Energy

The Risk Institute works closely with the Culham Centre for Fusion Energy (CCFE) on the development of uncertainty quantification (UQ) methods for application in the development of thermonuclear fusion devices. The CCFE is a world leader in fusion-for-energy research, and this ongoing partnership is a particularly exciting strand of our work towards securing the long-term future of global energy supply.

Energy projects

Digital Twin

The aim of the project is to create a robustly validated virtual prediction tool called a “digital twin”.

Bad Data

Resilience

This project examines the benefits that resilience engineering could offer in the context of nuclear safety systems.

Data

The Risk Institute has particular expertise in the domain of data. Our work is highly data-centric, and data permeates all our research interests. Our team of data scientists works in both fundamental research and application.

“Big Data” often appears in the headlines and permeates many high-impact academic journals and research programmes, yet in many cases having the ‘right’ data is better than having more of it. Amid the current media hyperbole one could be forgiven for believing that all data is Big Data; here at the Risk Institute, however, data doesn’t just mean “Big Data”: it is just one element of four.

Big Data

Big data is a term used to describe extremely large data sets, so large and complex that traditional data-processing methods are inadequate to deal with them. Big data has become synonymous with terms such as the Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML) and data analytics.

At the Risk Institute we engage in both fundamental research on the development of ML algorithms and applied research, working with our partners to develop market-relevant industrial applications. We have considerable experience in the development and implementation of sophisticated ML solutions in large-scale operations.

Small Data

Small Data is a term used to describe the situation of having very few data points. This is common when dealing with rare events (such as in reliability engineering or the analysis of terror incidents); with high-dimensional and unique data (such as in personalised medicine); or with data that are difficult to physically obtain, prohibitively expensive to collect, or unrepeatable (such as ethnographic data, for example witness statements to a crime).

Whilst Small Data is far easier to understand and conceptualise than its more famous big brother (Big Data), it presents very significant, and in many cases unresolved, issues.

Small Data sets share a commonality: when utilised in decision making they cause one to significantly underestimate risk (or the probability of occurrence). Evidence of this is all around us, from the surprisingly high frequency of supposed ‘black swans’ to the inability of medical practitioners to diagnose certain conditions.
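
The effect is easy to demonstrate. In the sketch below, a hypothetical record of fifty trials with no observed failures gives a naive frequency estimate of zero, while an exact one-sided 95% binomial upper bound (the familiar ‘rule of three’) tells a very different story. The numbers are illustrative.

    # Why small samples understate rare-event risk: with n trials and zero
    # observed events the naive estimate is 0, but a one-sided 95% binomial
    # upper confidence bound (the "rule of three") is roughly 3/n.

    def upper_bound_95(trials: int) -> float:
        """Exact 95% upper bound on p when zero events are seen: solve (1-p)^n = 0.05."""
        return 1 - 0.05 ** (1 / trials)

    n = 50  # e.g. fifty component-years with no recorded failures
    print(f"naive estimate : {0 / n:.4f}")
    print(f"95% upper bound: {upper_bound_95(n):.4f} (rule of three: {3 / n:.4f})")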

No Data

No Data is a term used to describe the situation in which we have no realisations, or data points, at all. This is far more common than one might believe in the modern world of IoT and ML.

No Data situations can arise from missing data (for example, when a sensor in a system fails for some period, or a drive becomes partially corrupted). No Data also describes situations involving latent variables, or where we are attempting to enter some system state, environment or condition that has never been reached before (for example, the development of commercial nuclear fusion reactors).

In situations of No Data we have few options, and we often rely on expert elicitation. Expert elicitation is commonly referred to as the ‘educated guess’; in the absence of anything else it can be helpful for quantifying uncertainty. Experts are, however, commonly wrong! Like the general public, they often significantly underestimate their uncertainty, a rather unhelpful relic of our evolution. The Risk Institute has a particular interest in the effective use of expert elicitation, and actively develops methodologies to mitigate the fallacy of overconfidence in risk analysis.
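
One simple guard, sketched below, is to aggregate elicited intervals by enveloping them (taking the widest bounds across experts) rather than averaging, so that disagreement widens the result instead of cancelling out. This is an illustrative device, not necessarily the Institute's own methodology, and the elicited numbers are invented.

    # Aggregating expert intervals: averaging endpoints hides disagreement,
    # while enveloping keeps the full spread. Numbers are hypothetical.

    def average(intervals):
        """Naive pooling: average the endpoints across experts."""
        lows, highs = zip(*intervals)
        return (sum(lows) / len(lows), sum(highs) / len(highs))

    def envelope(intervals):
        """Smallest interval containing every expert's (low, high) judgement."""
        lows, highs = zip(*intervals)
        return (min(lows), max(highs))

    # Three experts' elicited intervals for an annual failure probability.
    experts = [(0.01, 0.03), (0.02, 0.08), (0.005, 0.02)]
    print("averaged :", average(experts))   # narrower, looks more certain
    print("enveloped:", envelope(experts))  # honest about the disagreement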

Bad Data

Slightly more difficult to define than the other cases articulated above, Bad Data is really any data that, naively utilised in a decision-making process, leads to a ‘bad decision’.

Bad Data = Bad Decisions

Even the best data sets often require significant cleaning and preparation; in this sense almost all data is ‘bad’, but well-known and robust tools exist for handling this. We use the term to describe the more severe end of the scale, where the solution to the problem is yet to be clearly defined.

Bad Data can come from a variety of sources, such as noisy sensors, non-random sampling, surrogate measurements and disparate time scales. It can also be used to refer to situations of malicious falsification, blurry images, censored data, multivariate data or missing values.
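
One honest way to handle censored or missing values, sketched below, is to carry each one as an interval and report bounds on summary statistics, rather than imputing a single ‘cleaned’ number. The sensor limits and readings are invented for illustration.

    # Carrying censored and missing readings as intervals and bounding the
    # mean, instead of imputing point values. All numbers are illustrative.

    SENSOR_MAX   = 100.0  # hypothetical saturation limit of the sensor
    PHYSICAL_MAX = 250.0  # hypothetical physical upper bound on the quantity

    # Each reading is a (low, high) interval; exact readings have low == high.
    readings = [
        (12.0, 12.0),                # exact measurement
        (47.5, 47.5),                # exact measurement
        (SENSOR_MAX, PHYSICAL_MAX),  # right-censored: the sensor saturated
        (0.0, PHYSICAL_MAX),         # missing: only the physical range is known
    ]

    lower_mean = sum(lo for lo, _ in readings) / len(readings)
    upper_mean = sum(hi for _, hi in readings) / len(readings)
    print(f"the mean lies in [{lower_mean:.1f}, {upper_mean:.1f}]")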

Algorithms

Increasingly, many of the decisions that affect us are being automated, such as credit score assessment, assessing the propensity for recidivism, or screening job applications. These algorithmic decision makers are being given more power and control over our lives; the potential rewards seem clear, but the risks are sometimes more difficult to identify. Take, for example, the Facebook news feed: ten years ago it is unlikely that anyone would have identified it as a significant risk, yet social media targeted advertising and ‘fake news’ are now widely identified as decisive factors in the 2016 US presidential election and the Brexit referendum.

With our experience in the development of machine learning algorithms and their application to industry-scale problems, our expertise in uncertainty quantification, and our mission for ethical science, the Risk Institute sits at the unique intersection where much of this research is required.

There is a broad range of open research questions that will be critical to achieving the humane algorithms we will need for the future.

  • How should algorithms address uncertainty?
  • Are we replacing the faceless bureaucrat with the heartless algorithm?
  • Are algorithms inherently fair?
  • How do we identify fairness in algorithms?
  • How do we operationalise our informal notions of ethics into machine-executable code?
  • How can we verify algorithms?
  • How does the ‘right-of-appeal’ work in an automated decision?
  • What does interpretability mean?
  • Is interpretability the answer to transparency?
  • Is interpretability more important than performance?

Even the answer to what a humane algorithm is remains an open question: is an algorithm humane if it is user-friendly and accommodating to people, handles diversity and uncertainty when appropriate, and leads to fair and just results in a way that is transparent whenever possible?
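
One of the questions above, identifying fairness, at least has well-studied starting points. The sketch below computes a demographic parity gap, the difference in an algorithm's positive-decision rate between groups; the decisions and group labels are synthetic, invented for illustration.

    # Demographic parity: compare an algorithm's positive-decision rate
    # across groups. Decisions and group labels below are synthetic.

    from collections import defaultdict

    def positive_rates(decisions):
        """Rate of positive outcomes per group from (group, decision) pairs."""
        counts, positives = defaultdict(int), defaultdict(int)
        for group, decision in decisions:
            counts[group] += 1
            positives[group] += decision
        return {g: positives[g] / counts[g] for g in counts}

    decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = positive_rates(decisions)
    print(rates)  # e.g. {'A': 0.67, 'B': 0.33}
    print("demographic parity gap:", max(rates.values()) - min(rates.values()))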

Medicine

Three key themes govern the Risk Institute's interest in medicine: personalised medicine, enabling co-decision and informed choice, and data-driven medicine.

Personalised Medicine

Personalised medicine is a move away from the traditional ‘one size fits all’ approach to treatment and patient care. Patients are individuals, with unique histories, medication usage and symptoms; personalised medicine is a theme of modern medical provision and research which attempts to take account of this individuality. The NHS alone spends £8.8 billion on medicines per year, and in 2010 the WHO estimated that 50% of medicines worldwide are prescribed, dispensed or sold inappropriately. Inappropriate medical treatments cost healthcare providers and have a detrimental impact on the health of patients; personalised medicine may very well be one of the answers. The Risk Institute, in collaboration with our partners, is working on a number of projects in this area, such as using big data to understand the effects of drug-drug interactions, predicting antagonistic behaviour and providing tools that allow providers to better tailor treatment strategies.

Co-decision and informed choice

Data-driven medicine

With the Risk Institute's expertise in big data, bad data, machine learning and risk analysis, in conjunction with our large network of expert practitioners and other stakeholders, we are able to put the most cutting-edge research methods to work for the greater good. Medicine is already a data-intensive field, yet this data is sometimes misunderstood, or not utilised on the front lines of medical provision. The Risk Institute has already demonstrated the benefits of formalising the use of data: using Bayesian statistics we developed an algorithm to diagnose Giant Cell Arteritis (GCA) from a patient questionnaire. GCA has historically been an extremely difficult condition to diagnose, and can require invasive tests which carry risks to patients' eyesight and even their lives. Our algorithm significantly outperforms even the best ophthalmology consultants.
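
The underlying idea can be sketched with Bayes' rule applied to yes/no questionnaire answers, assuming the answers are conditionally independent given the condition (a naive Bayes simplification). The questions and probabilities below are invented for illustration and are not the actual GCA model.

    # Bayesian diagnosis from binary questionnaire answers, assuming
    # conditional independence (naive Bayes). All numbers are illustrative.

    def posterior(prior, answers, likelihoods):
        """P(condition | answers) via Bayes' rule over yes/no answers."""
        p_cond, p_not = prior, 1 - prior
        for question, answer in answers.items():
            p_yes_cond, p_yes_not = likelihoods[question]
            p_cond *= p_yes_cond if answer else 1 - p_yes_cond
            p_not  *= p_yes_not  if answer else 1 - p_yes_not
        return p_cond / (p_cond + p_not)

    # (P(yes | condition), P(yes | no condition)) for each question.
    likelihoods = {
        "new_headache": (0.85, 0.20),
        "jaw_pain":     (0.50, 0.05),
        "vision_loss":  (0.30, 0.02),
    }
    answers = {"new_headache": True, "jaw_pain": True, "vision_loss": False}
    print(f"P(condition | answers) = {posterior(0.05, answers, likelihoods):.2f}")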

Smarter Mobility

The Smarter Mobility Network is working on a mobility decision support system developed and maintained by open-source co-creation (like Wikipedia) to advise both regional planners and individual travellers. The system employs stochastic optimisation (stochastic programming) to identify optimal modes, schedules and routes for travel from pre-computed risk maps that account for various costs of travel, including:

  • risk of death and injury for the traveller, passengers, pedestrians and other travellers,
  • environmental costs in terms of likely emissions of vehicle exhaust (NOx, hydrocarbons, particulate matter and greenhouse gases) during the trip, and the attributable ecological impacts associated with habitat destruction and dissection from infrastructure construction and maintenance, and
  • economic costs of the trip given the route, schedule, mode and vehicle, but also the indirect economic costs associated with traffic congestion delays, health impacts from injuries and pollution, environmental degradation, and infrastructural investment and maintenance.

The system facilitates distributed optimal decision making by leveraging stochastic optimisation techniques and blockchain accounting with strong encryption to protect personal privacy. The risk maps are created by both generic models that are developed for worldwide use and local models developed for particular regions using local expertise and regional data. Individual travellers making use of the smartphone app will create a feedback stream of data relevant for the decision engine and transportation science generally. Encouraged under a citizen science programme, data streams from hospitals, insurers, police, government bodies and other contributors will also inform the decision engine about local conditions. Network research partners will also develop local data sets and data streams, and particularly the local risk models that take account of local laws, customs and conventions within each country or region. Everyone can contribute to the data sets and models used to create risk maps, so the result is a truly co-created system.
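
As a concrete illustration of the stochastic optimisation step, the sketch below scores each candidate route by its expected weighted cost (safety, emissions, money) over Monte Carlo samples drawn from per-route cost distributions, then recommends the cheapest. The routes, distributions and weights are invented and stand in for the system's pre-computed risk maps.

    # Stochastic route selection: estimate each route's expected weighted
    # cost by Monte Carlo sampling, then pick the minimum. The routes,
    # cost distributions and weights are invented placeholders.

    import random

    random.seed(0)

    # Per-route cost models: (mean, spread) for each cost dimension.
    routes = {
        "motorway":   {"injury": (0.8, 0.3), "emissions": (5.0, 1.0), "money": (12.0, 2.0)},
        "back_roads": {"injury": (1.0, 0.6), "emissions": (3.0, 0.5), "money": (8.0, 1.5)},
    }
    weights = {"injury": 10.0, "emissions": 1.0, "money": 0.5}

    def expected_cost(model, n_samples=10_000):
        """Monte Carlo estimate of a route's expected weighted total cost."""
        total = 0.0
        for _ in range(n_samples):
            total += sum(w * random.gauss(*model[k]) for k, w in weights.items())
        return total / n_samples

    best = min(routes, key=lambda name: expected_cost(routes[name]))
    print("recommended route:", best)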