About

I build AI systems that stress-test arguments and surface counter-arguments. Based in London.
I'm especially interested in the intersection of epistemology and AI: how to build systems that don't just give answers, but show their work and surface the strongest counter-arguments.
Currently: Head of AI at Alex Epstein.
Background
I've been building AI systems for the past few years — knowledge bases, research pipelines, and tools that help people think through complex questions. Before that, I built and sold a startup (EventIgnite), which gave me an appreciation for building things people actually want to use.
The thread running through my work is verification and stress-testing. Whether I'm building AI that cites its sources, debate systems that surface counter-arguments, or research tools that show their work, the goal is always the same: to give people confidence that claims have been properly examined.
What I Think About
Truth-Seeking AI
Verification is what makes the 10x–100x productivity gains in software engineering possible: agents can keep running, check their own work, and iterate. I'm building the equivalent for intellectual work — AI that stress-tests arguments, surfaces counter-arguments, and provides assurance that claims have been examined.
AI Capabilities and Limits
AI optimizes within a given context. It finds better tactics, sharper evidence, more effective phrasing. But it doesn't step back and ask: "Am I arguing the wrong thing entirely?" Humans break out of context. AI executes within it. Understanding this distinction is key to using AI well.
The 100x Public Intellectual
AI can now do most of what used to require a research team. The bottleneck shifts to clarity — knowing what questions to ask, what arguments to stress-test, what's worth pursuing. The people (and AIs) who provide that clarity will have outsized impact.
Connect
I'm always interested in talking to people working on similar problems. Reach out on Substack, X, or LinkedIn.