Dispatch from the New Edge
New Tools, Same Standards
In 2024, I built something I’m genuinely proud of.
A full backtesting pipeline. Data sourcing, walk-forward optimization, thousands of parameter combinations per security, out-of-sample validation built for rigor rather than convenience. I built it without AI coding assistance, which, at the time, wasn’t good enough to meet my standards anyway. I built the full pipeline by hand, while working full-time and in grad school, and when I finally finished the visualization suite that would have made it publishable, I hit a wall I couldn’t get through.
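For readers unfamiliar with the term, walk-forward optimization means rolling a training window and an out-of-sample validation window forward through the data, so every parameter choice is tested on bars it never saw. The sketch below is purely illustrative, assuming made-up window sizes; it is not the author's pipeline code.

```python
# Hypothetical sketch of walk-forward splits. Function name and
# window sizes are illustrative assumptions, not the original code.

def walk_forward_splits(n_bars, train_size, test_size):
    """Yield (train, test) index ranges that roll forward through
    the data: optimize on train, validate out-of-sample on test."""
    start = 0
    while start + train_size + test_size <= n_bars:
        train = range(start, start + train_size)
        test = range(start + train_size, start + train_size + test_size)
        yield train, test
        start += test_size  # advance by one validation window

# e.g. 1000 bars, 250-bar optimization window, 50-bar validation window
splits = list(walk_forward_splits(1000, 250, 50))  # → 15 rolling splits
```

The point of the structure is that the test window always sits strictly after the training window, which is what "built for rigor rather than convenience" cashes out to in practice.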
I put it down. I didn’t come back for a long time.
When I did, the pipeline was still there. Intact, documented, ready to run. But working with it again felt stranger than I expected.
The tooling landscape looks nothing like it did when I built that pipeline.
In 2024, I tried using AI assistants to support my workflow and found them wanting. Not dramatically; they could handle simple tasks. But they couldn’t keep up with the kind of thinking the research required: the judgment calls, the architectural decisions, the places where domain knowledge actually matters. I went back to building by hand.
That’s changed. The tools I’m using now are legitimate thinking partners. They accelerate implementation, catch things I miss, and handle the mechanical work that used to just cost time. The delta between then and now is not incremental.
But I want to be precise about what that means, because the force multiplier framing cuts both ways.
These tools amplify what’s already there. Someone with deep domain knowledge, good architectural instincts, and a precise relationship with language will get extraordinary results. Someone without those things will get something that looks like extraordinary results for a while. The difference shows up later, in maintenance, in edge cases, in the moment when the system needs to do something slightly outside what was originally built.
I’ve thought about this in terms of language as much as code. The words you use with these tools are not interchangeable. Telling a system to treat something as a guideline produces different behavior than telling it to treat something as an axiom. One invites interpretation. The other closes it down. That distinction matters in quantitative research, where the difference between a soft constraint and a hard one can be the difference between a valid backtest and a compromised one.
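The guideline-versus-axiom distinction has a direct analogue in validation code. A minimal sketch, with a hypothetical lookahead-bias check I am inventing for illustration: a hard constraint fails the run outright, while a soft one merely flags the issue and lets the backtest proceed.

```python
# Illustrative only: how a soft vs. hard constraint might differ in a
# backtest validator. Function and messages are hypothetical.

def check_lookahead(signal_time, fill_time, hard=True):
    """Flag fills that precede the signal that triggered them.

    As a hard constraint (axiom), a violation aborts the run.
    As a soft constraint (guideline), it only produces a warning,
    leaving room for interpretation -- and for a compromised result.
    """
    if fill_time < signal_time:
        if hard:
            raise ValueError("lookahead bias: fill precedes signal")
        return "warning: fill precedes signal"
    return "ok"
```

The code is trivial; the behavioral difference is not. A run that merely warns can still produce impressive-looking numbers built on lookahead bias, which is exactly the valid-versus-compromised distinction above.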
The baseline I have, knowing exactly what it cost to build this infrastructure without AI assistance, is the thing that lets me measure the tools accurately. I’m not guessing at the delta. I’ve lived both sides of it.
What I’m building here runs on two tracks, and I think they’re inseparable.
The first is original research. Rigorous methodology, honest results, no cherry-picked outcomes. The kind of work that doesn’t apologize for complexity or dumb itself down for accessibility.
The second is transparency about how that research gets made. The tools, the decisions, the judgment calls, the places where AI assistance helped and the places where it didn’t. To me, that’s intellectual honesty, not just a concession to the education market.
The case for combining them is simple. AI tools are only as good as the person directing them, and the only way to demonstrate that is to show the work. Results without methodology are just claims. Methodology without results is just theory.
This is also a moment that rewards engagement over consumption, and learning over absorption. The speed of change in this field doesn’t make research and education less important. It makes them more important. Judgment still matters in trading and investing. Curiosity still matters. The tools have gotten extraordinarily powerful, but they don’t replace the person using them. They reveal that person more clearly than ever.
I’ve been on the other side of this, building serious infrastructure alone, without the tools that exist now, and hitting a wall I couldn’t get through. Coming back to that work with a different set of tools, and a clearer sense of what I’m trying to do with them, is what makes this feel like a beginning rather than a restart.
Until next time, keep on the cutting edge, everyone.


