Sparse Autoencoders Are the SR 11-7 Tool I Wish I'd Had
5 min read
How mechanistic interpretability tools turn opaque LLMs into auditable systems that regulated banks can actually deploy under existing model risk frameworks.
I'm Head of AI at 2OS, a credit-risk consulting firm working with top US banks and fintechs. Before that, I did a PhD in theoretical physics at UVA on topological phases of matter.
I write about interpretable ML for regulated lending, generative AI in banking, and the physics of complex systems.