Newcastle University is offering one fully funded EPSRC PhD studentship under its Doctoral Landscape Awards (DLA) portfolio. The project focuses on building a real-time AI safety validation framework using high-fidelity digital twins for critical infrastructure systems.
The project is titled Dynamic Validation of AI Systems in Digital Twins: A Real-Time Safety Framework for Critical Infrastructure Resilience (DLA2631). It is open to UK, EU, and international students, making it a strong option for applicants worldwide who want to work at the intersection of AI safety, cyber-physical systems, resilience engineering, and regulatory compliance.
Author: Dr Niaz Chowdhury
Designation: Lecturer (Computer Science)
Affiliation: Ulster University (Birmingham), UK
Funding and award details
Award summary (fully funded):
- 100% tuition fees covered
- Minimum tax-free annual stipend of £20,780 (UKRI 2025/26 rate)
- Additional project costs provided
Sponsor: EPSRC
University: Newcastle University
Number of awards: 1
Duration: 4 years
Start date: 1 October 2026
Application deadline: 15 February 2026
Project code: DLA2631
Why this research matters
Critical infrastructure—such as power grids, transport networks, and water treatment plants—increasingly relies on AI-driven decision-making to improve efficiency and autonomy. But these systems operate in environments where real-world conditions rarely match lab assumptions.
Weather disruptions, cyber incidents, sensor faults, and equipment degradation can cause AI systems to behave in unexpected ways. In a connected infrastructure ecosystem, even a single error can cascade into larger failures, leading to serious safety and economic consequences.
Digital twins are already widely used for predictive maintenance and optimisation, but many current approaches do not provide a robust framework for continuous, real-time verification of AI safety during operations. This PhD aims to close that gap.
Project overview
This PhD will develop a dynamic validation framework that uses high-fidelity digital twins to continuously stress-test AI behaviours under realistic and adversarial conditions.
The core idea is to move from “AI tested before deployment” to “AI continuously validated during operation” by running real-time simulations of edge cases such as the following (a minimal sketch of the loop appears after this list):
- cyber-physical attacks
- sensor failures and noise
- equipment degradation
- other rare but high-impact operational scenarios
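To make the continuous-validation idea concrete, here is a minimal sketch of what one validation cycle might look like. It is illustrative only: the DigitalTwin interface, the scenario names, and the pressure invariant are all hypothetical stand-ins, not code or thresholds from the project.

    import random

    # Hypothetical edge-case scenarios the twin can inject (illustrative names).
    SCENARIOS = ["sensor_dropout", "sensor_noise", "actuator_degradation", "spoofed_command"]

    class DigitalTwin:
        """Stand-in for a high-fidelity twin synchronised with the live system."""

        def sync_with_plant(self) -> None:
            pass  # a real twin would pull live telemetry here

        def inject(self, scenario: str) -> None:
            pass  # perturb the simulated state (fault, attack, degradation)

        def run_ai_controller(self, horizon: int) -> list[dict]:
            # Roll the AI controller forward inside the twin; return state traces.
            return [{"pressure": random.gauss(5.0, 0.3)} for _ in range(horizon)]

    def violates_safety(state: dict) -> bool:
        # Example invariant (made up): pipeline pressure must stay below 8.0 bar.
        return state["pressure"] >= 8.0

    def validation_cycle(twin: DigitalTwin) -> list[str]:
        """One cycle: re-sync with the plant, stress-test the AI in simulation,
        and report any scenario that drives it into an unsafe state."""
        twin.sync_with_plant()
        flagged = []
        for scenario in SCENARIOS:
            twin.inject(scenario)
            trace = twin.run_ai_controller(horizon=100)
            if any(violates_safety(s) for s in trace):
                flagged.append(scenario)  # candidate for pre-emptive mitigation
        return flagged

The shape of the loop is the point: the twin is re-synchronised with the live system, edge cases are replayed in simulation faster than real time, and any scenario that pushes the AI into an unsafe state is flagged before it can occur on the physical plant.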
Key research challenges
The project is motivated by four major challenges in deploying AI safely in critical infrastructure contexts:
1) Dynamic environments
Real-world conditions are unpredictable. Environmental variation and cyber threats can push AI systems beyond the conditions they were validated under.
2) Cascading failures
Infrastructure systems are interconnected. Poor AI decisions can propagate across components and networks, increasing the risk of widespread disruption.
3) Regulatory lag
Many existing safety certification approaches (including standards-based models) are not designed for continuously adapting AI systems, creating a gap between static compliance and dynamic risk.
4) Verification gaps in current digital twins
Digital twins often support optimisation and maintenance but lack mechanisms to continuously verify AI safety and alignment in real-time operational contexts.
What the project aims to deliver
The proposed framework aims to address these challenges by:
- designing resilience metrics to quantify AI safety, e.g., robustness, recoverability, and ethical compliance (a toy recoverability metric is sketched after this list)
- bridging digital twin simulations with physical systems via real-time monitoring for pre-emptive risk mitigation
- embedding regulatory rules (e.g., EU AI Act-aligned governance thinking) into the digital twin environment to support fairness and transparency auditing
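As an illustration of the first point, a recoverability metric can be as simple as the time an AI-controlled signal takes to return to its nominal operating band after a simulated fault and stay there. The sketch below is a hypothetical example of such a metric, not the project's actual definition:

    def recoverability(trace: list[float], fault_step: int,
                       nominal: tuple[float, float], settle_steps: int = 10) -> int | None:
        """Steps from the fault until the signal re-enters its nominal band and
        stays there for `settle_steps` consecutive steps; None if it never recovers."""
        lo, hi = nominal
        in_band = 0
        for i, value in enumerate(trace[fault_step:]):
            in_band = in_band + 1 if lo <= value <= hi else 0
            if in_band >= settle_steps:
                return i - settle_steps + 1  # first step of the settled period
        return None

    # Example: a pressure trace that dips after a fault at step 3, then recovers.
    trace = [5.0, 5.1, 4.9, 2.0, 2.5, 3.8, 4.6] + [5.0] * 12
    print(recoverability(trace, fault_step=3, nominal=(4.5, 5.5)))  # -> 3

Richer metrics (robustness margins, worst-case behaviour, compliance scores) would follow the same pattern: they are computed from twin-generated traces, so they can be refreshed continuously rather than fixed once at certification time.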
Training and supervision environment
You will receive training in:
- physics-informed digital twin development and critical infrastructure simulation
- formal verification methods, including probabilistic model checking (a brief example follows this list)
- AI safety compliance, including regulatory expectations relevant to modern AI governance
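To give a flavour of the model-checking strand: in probabilistic model checking, a safety requirement is written as a formal property of a stochastic system model and checked exhaustively rather than sampled by testing. An illustrative (not project-specific) PCTL property is P≥0.99 [ G≤100 ¬unsafe ], which asks that the probability of avoiding unsafe states over the next 100 time steps is at least 0.99; tools such as PRISM or Storm compute that probability directly from a Markov model of the system.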
Supervisory team
The full supervisory team is listed on the official project advert; the named project contact is Dr Yinhao Li (see Contact details below).
Ideal applicant profile
Newcastle is looking for a candidate with a strong foundation in computer science or systems engineering. The following are particularly advantageous:
- knowledge of AI/ML algorithms
- familiarity with simulation environments
- interest in critical infrastructure resilience, cyber-physical systems, and AI safety ethics
- ability to think analytically about safety certification and regulatory compliance
Eligibility and how to apply
This studentship is advertised as funded for:
- UK students
- EU students
- International students
For the full eligibility criteria and application steps, you should apply via Newcastle University’s EPSRC Doctoral Landscape Awards project listings (look for DLA2631).
Contact details
Project contact:
- Dr Yinhao Li (listed on the project advert)
Important dates (save these)
- Application deadline: 15 February 2026
- Start date: 1 October 2026
- Duration: 4 years