The long arc.
A timeline of the projects that shaped how I build: research first, then tooling, then real products.
During lockdown I went all-in on reinforcement learning, search, and neuroevolution. I reimplemented core ideas end-to-end to understand what actually makes them work — not just how to use libraries. That foundation is still how I build today.
Implemented NEAT, REINFORCE, DQN/Double/Dueling, DDPG, TD3, Minimax, MCTS, and early AlphaZero-style experiments — mostly as clean standalone codebases focused on clarity and iteration speed.
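For flavor, here's a minimal REINFORCE sketch in that spirit. It's illustrative, not the original code; it assumes PyTorch and Gymnasium's CartPole-v1 as a stand-in task.

```python
# Minimal REINFORCE on CartPole: sample an episode, compute discounted
# returns, and ascend the policy gradient. Illustrative sketch, not the
# original repo code; assumes gymnasium and torch are installed.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        dist = torch.distributions.Categorical(
            logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted returns, computed backwards (gamma = 0.99), then normalized.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.append(g)
    returns = torch.tensor(returns[::-1])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # REINFORCE objective: minimize -sum(log pi(a|s) * G_t).
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```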
Taught week-long AI lessons, breaking down complex concepts into hands-on projects and building stronger teaching and communication skills across very different learner backgrounds.
Wrote and submitted a full research paper on reducing exploration time in locomotion. The approach: initialize a walking policy from motion-capture mimicry, then fine-tune with DDPG in simulation. The mimicry-initialized policy started stronger but plateaued; the key lesson was that observation design can be the real ceiling.
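A hedged sketch of that two-stage recipe is below. All tensors are synthetic stand-ins rather than real mocap data, and DDPG's replay buffer and target networks are omitted for brevity; nothing here is the paper's actual code.

```python
# Stage 1: behavioral cloning on (stand-in) mocap pairs. Stage 2: one
# DDPG-style update on a (stand-in) replay batch. Replay buffer and
# target networks omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim = 32, 8  # placeholder dimensions

actor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                      nn.Linear(128, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
                       nn.Linear(128, 1))

# Stage 1: warm-start by regressing the actor onto mocap (state, action) pairs.
mocap_obs = torch.randn(1024, obs_dim)         # stand-in for mocap states
mocap_act = torch.randn(1024, act_dim).tanh()  # stand-in for mocap actions
bc_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
for _ in range(200):
    bc_loss = F.mse_loss(actor(mocap_obs), mocap_act)
    bc_opt.zero_grad()
    bc_loss.backward()
    bc_opt.step()

# Stage 2: one DDPG-style fine-tuning step on a placeholder replay batch.
gamma = 0.99
obs = torch.randn(256, obs_dim)
act = torch.randn(256, act_dim)
rew = torch.randn(256, 1)
next_obs = torch.randn(256, obs_dim)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

with torch.no_grad():  # bootstrapped target (target networks omitted)
    target_q = rew + gamma * critic(torch.cat([next_obs, actor(next_obs)], dim=-1))
critic_loss = F.mse_loss(critic(torch.cat([obs, act], dim=-1)), target_q)
critic_opt.zero_grad()
critic_loss.backward()
critic_opt.step()

# Deterministic policy gradient: push actions toward higher critic value.
actor_loss = -critic(torch.cat([obs, actor(obs)], dim=-1)).mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()
```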
Built SignWave: audio/text → transcript → ASL animation. Used Whisper for speech-to-text, MediaPipe hand landmarks, a word-to-coordinate dictionary, and a Three.js visualization layer. The repo hit 77+ stars and was copied wholesale often enough that we had to flag clones more than once.
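The front half of that pipeline looks roughly like this. `whisper.load_model` / `model.transcribe` is the real openai-whisper interface, while `word_to_coords.json`, the fingerspelling fallback, and the output file are illustrative assumptions about how the dictionary feeds the Three.js layer.

```python
# Sketch of SignWave's front half: speech -> transcript -> pose coordinates.
# word_to_coords.json and the fallback behavior are illustrative assumptions.
import json
import whisper

model = whisper.load_model("base")
transcript = model.transcribe("input_audio.mp3")["text"]

# Word -> hand-landmark coordinate sequences, built offline with MediaPipe.
with open("word_to_coords.json") as f:
    sign_dict = json.load(f)

frames = []
for word in transcript.lower().split():
    word = word.strip(".,!?")
    if word in sign_dict:
        frames.extend(sign_dict[word])   # known sign: full landmark sequence
    else:
        for ch in word:                  # unknown word: fingerspell letter by letter
            frames.extend(sign_dict.get(ch, []))

# Hand the frame sequence off to the Three.js animation layer.
with open("animation_frames.json", "w") as f:
    json.dump(frames, f)
```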
Presented work on evolving virtual creatures and training a universal locomotion policy, then explored coordination in a 2v2 soccer environment (Karl Sims-inspired evolution + PPO + an AlphaZero/MCTS-style coordination layer).
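For context on the coordination layer, here's a minimal UCT-style MCTS sketch on a toy game (Nim: take 1-3 stones, last stone wins). It shows the search family, not the soccer environment's actual implementation.

```python
# Minimal UCT/MCTS on Nim, a toy stand-in for the coordination search.
import math
import random

MOVES = (1, 2, 3)

class Node:
    def __init__(self, pile, parent=None):
        self.pile = pile
        self.parent = parent
        self.children = {}  # move -> Node
        self.visits = 0
        self.value = 0.0    # wins from the perspective of the player who moved into this node

def legal(pile):
    return [m for m in MOVES if m <= pile]

def select(node):
    # Descend through fully expanded nodes using UCB1.
    while node.pile > 0 and len(node.children) == len(legal(node.pile)):
        node = max(
            node.children.values(),
            key=lambda c: c.value / c.visits
            + 1.4 * math.sqrt(math.log(node.visits) / c.visits),
        )
    return node

def expand(node):
    untried = [m for m in legal(node.pile) if m not in node.children]
    if not untried:
        return node  # terminal position, nothing to expand
    move = random.choice(untried)
    child = Node(node.pile - move, parent=node)
    node.children[move] = child
    return child

def rollout(pile):
    # Random playout; returns 1 if the player to move from `pile` takes the last stone.
    turn = 0
    while pile > 0:
        pile -= random.choice(legal(pile))
        turn ^= 1
    return 1 if turn == 1 else 0

def backpropagate(node, reward):
    # `reward` is from the perspective of the player to move at `node`;
    # each node stores value for the opponent (who moved into it), so flip per level.
    while node is not None:
        node.visits += 1
        node.value += 1 - reward
        reward = 1 - reward
        node = node.parent

def best_move(pile, iterations=2000):
    root = Node(pile)
    for _ in range(iterations):
        leaf = expand(select(root))
        backpropagate(leaf, rollout(leaf.pile))
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(best_move(10))  # optimal play leaves a multiple of 4, i.e. take 2
```

With a couple thousand playouts the search should settle on taking 2, the game-theoretically best move from a pile of 10.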
Started building small, made-to-order Python libraries after a few people asked for tools for specific workflows. Over a few months I shipped focused utilities with solid edge-case handling and clean interfaces, the kind of tooling that holds up under real usage.
Did consulting work helping build AskBGP: a tool for network professionals and researchers to analyze and replay BGP events using RIPE NCC data. It became a real public utility with a clean UI and multiple analysis workflows.
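The kind of replay query AskBGP builds on looks roughly like this. `bgp-updates` is a documented RIPEstat data call, but the prefix, time window, and response-field handling here are illustrative, not AskBGP's actual code.

```python
# Fetch announce/withdraw events for a prefix from RIPE NCC's public
# RIPEstat API. Prefix and time window are arbitrary examples.
import requests

resp = requests.get(
    "https://stat.ripe.net/data/bgp-updates/data.json",
    params={
        "resource": "193.0.0.0/21",       # example prefix (RIPE NCC's own)
        "starttime": "2024-01-01T00:00:00",
        "endtime": "2024-01-01T06:00:00",
    },
    timeout=30,
)
resp.raise_for_status()
updates = resp.json()["data"]["updates"]

# Replay in time order: each update is an announcement (A) or withdrawal (W).
for u in updates:
    path = u.get("attrs", {}).get("path", [])
    print(u["timestamp"], u["type"], path)
```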
Worked with a telecom team exploring how to connect internal data to an LLM layer for useful analysis and decision support. Researched agent-style approaches and built a Manus-like prototype to validate workflows and architecture patterns.
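A skeleton of the agent loop pattern we validated is below. `llm_decide` is a stub standing in for a real model call, and both tools are hypothetical examples of internal-data lookups, not the client's actual systems.

```python
# Agent loop skeleton: the LLM picks a tool, the runtime executes it,
# and the result is appended to the context until the model answers.
import json

def query_outage_db(region: str) -> str:        # hypothetical internal tool
    return json.dumps({"region": region, "open_tickets": 3})

def lookup_customer(account_id: str) -> str:    # hypothetical internal tool
    return json.dumps({"account_id": account_id, "status": "active"})

TOOLS = {"query_outage_db": query_outage_db, "lookup_customer": lookup_customer}

def llm_decide(history: list[dict]) -> dict:
    """Stub for the model call: returns either a tool invocation or a final answer."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "query_outage_db", "args": {"region": "on-east"}}
    return {"answer": "3 open outage tickets in on-east; recommend escalation."}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = llm_decide(history)
        if "answer" in decision:                # model chose to stop
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(run_agent("Summarize current outages in Ontario east."))
```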
Started OutageHub solo, then brought in friends to scale it. The goal is a reliable real-time outage map and API for Canada. Current focus: data quality, coverage, and partnerships with stakeholders who can validate accuracy and amplify impact.
Joined G&K Software to bring in modernization work and fund OutageHub without selling equity. Focused on outreach and sales while staying close to delivery: service carve-outs, integration layers, downstream data work, and testing support.
Cofounded Do Better Foundation to investigate gaps in government execution and delivery. Starting with construction delays: pulling evidence together across messy sources and surfacing repeatable root causes using applied AI workflows.
Looking for the next environment where I can level up in simulation, infrastructure, and high-performance engineering — and keep building systems people actually rely on.