About Gorp Labs

Built by people who
understand the stakes.

We are a team of AI researchers, safety engineers, and domain specialists. We did not start in consumer tech and pivot to safety-critical work. This is where we began.

Our mission

To make artificial intelligence a reliable tool
in the environments where the cost of
unreliability is measured in lives.

Gorp Labs was founded on a simple observation: the organisations most in need of AI are the ones least served by the AI industry. Nuclear operators, NHS Trusts, national security agencies, infrastructure managers — these are organisations where AI could deliver profound value, but where the standard playbook of move-fast-and-iterate is not just insufficient. It is dangerous.

We exist to bridge that gap. We build AI systems that these organisations can trust — not because we cut corners on safety, but because we treat safety as the foundation of the engineering problem, not a constraint to work around.

Every Gorp Labs system is designed, built, and validated to the standards of its operating environment. We are not a general AI firm that happens to work in regulated sectors. This is all we do.

How we operate

The principles we don't compromise on.

01
Honesty before helpfulness
If AI cannot solve your problem, we will tell you. If your data is not ready, we will tell you. We will not take an engagement we cannot deliver to the standard your environment demands. Our reputation is built on candour, not contract value.
02
Domain depth, not just ML breadth
Nuclear safety culture, NHS governance, defence security frameworks — we understand these environments from the inside. Every Gorp Labs team is a combination of ML expertise and genuine domain knowledge. We do not parachute in generalists.
03
Rigour as a competitive advantage
We are slower than firms that move fast and cut corners. We are more expensive than teams that skip evaluation. Yet our systems stay in production longer, fail less often, and create less liability for the organisations that deploy them. Rigour is not a constraint. It is the product.

The team

Researchers and engineers
who have worked where it counts.

AK
Dr Amara Kone
Co-founder & CEO
Former research scientist at DeepMind's safety team. Led AI programme design for NHS England before founding Gorp Labs. DPhil in Computer Science from Oxford.
LinkedIn ↗
JR
James Reeve
Co-founder & CTO
Previously Staff ML Engineer at Palantir, embedded with UK defence and intelligence clients. Built predictive maintenance systems for EDF Energy's nuclear fleet.
LinkedIn ↗
SB
Dr Sophia Brennan
Head of Research
PhD in Bayesian ML from Cambridge. Author of seventeen peer-reviewed papers on uncertainty quantification and safety in clinical AI systems. Former UKAEA research fellow.
LinkedIn ↗
MT
Marcus Tate
Head of Engineering
Fifteen years in safety-critical software engineering across rail, aerospace, and nuclear. IEC 61508 functional safety practitioner. Expert in formal verification methods for ML systems.
LinkedIn ↗
PO
Dr Priya Okonkwo
Head of Regulatory
Former MHRA medical device assessor with deep expertise in AI as a Medical Device regulatory pathways. Advisor to the UK AI Safety Institute on high-risk AI classification.
LinkedIn ↗
LH
Lena Hoffmann
Principal ML Engineer
Specialist in domain-adaptive NLP and large language model fine-tuning for regulated-sector corpora. Previously led ML infrastructure at a FTSE 100 financial services firm.
LinkedIn ↗

Our story

From a conviction to a company.

2022
Founded in London
Amara and James leave their respective positions to found Gorp Labs, having identified a clear gap in the market for AI firms that genuinely understand safety-critical operational environments.
2023
First nuclear deployment
Gorp Labs' predictive maintenance system goes live at a UK nuclear licensed site — the first bespoke AI deployment of its kind approved under ONR guidance. Zero incidents in production to date.
2024
NHS and defence expansion
Partnership agreements with two NHS Trusts and a first engagement with a DSTL-affiliated programme. Research team doubles with the addition of Sophia and Priya.
2025
Series A & international reach
Closed a £12M Series A round to accelerate growth across European nuclear and healthcare markets. Opened second office in Edinburgh to serve Scottish public sector and energy clients.
2026
Today
Eighteen people across London and Edinburgh, operating in six sectors, with active engagements including nuclear, healthcare, national security, and critical infrastructure. Building the team for the next phase.

Accreditations & frameworks

Operating to the highest standards.

ISO/IEC 42001
AI Management
IEC 61508
Functional Safety
NHS DTAC
Compliant
ONR Guidance
Aligned
EU AI Act
Ready
Cyber Essentials
Plus

Want to work with us?

Whether you are a client with a problem or an engineer who wants to work on something that matters, we would like to hear from you.