When people ask how long I expect this to take, I tell them ten to twenty years. The response is usually some version of surprise — not at the ambition, but at the candor. Most founders give a more compressed answer, not because they believe it, but because a longer one requires a different kind of explanation.
I want to offer that explanation here, because the timeline is not incidental to what I am building. It is a load-bearing part of the architecture.
Why the timeline shapes everything
The problems I am working on — building AI systems that structurally enforce human agency, and understanding how civic pressure actually functions in practice — do not have short solutions. Not because they are impossibly hard, but because they are the kind of problems that require patient, compounding effort to address well. They involve institutions, habits, trust, and infrastructure, all of which change slowly and resist shortcuts.
If I were optimizing for a five-year outcome, I would build differently. I would skip the architectural decisions that add complexity without near-term return. I would defer the hard questions about the intelligence layer, the ledger, the mutation path — things that matter enormously in year ten and are invisible in year two. I would make the system look impressive before making it sound.
The long timeline is not a concession. It is a constraint that keeps me honest. Every decision I make gets pressure-tested against a harder question: will this still make sense in fifteen years, when the system has accumulated real usage, real history, real users who depend on it? A lot of convenient choices fail that test. The long timeline surfaces those failures early, when the cost of correcting them is just thinking clearly rather than rewriting infrastructure.
What it means to build alone
Solo, in this context, does not mean I intend to remain the only person working on this indefinitely. It means that the foundational decisions — the doctrine, the architecture, the governing method — are mine. FINITE is the personal entity. It governs ARCHONFRAME. That hierarchy is explicit and intentional.
Long-horizon work requires a single point of conviction at the center. Not because collaboration is a problem, but because certain kinds of questions do not resolve well by committee. What is this system ultimately for? What will we refuse to compromise on when the pressure to do so is high? What does it mean for human agency to be structural rather than optional? These are questions that need a consistent answer across years, not a negotiated answer across meetings.
That does not mean the answers are fixed forever. Doctrine can evolve — mine has, and it will again. But there is a difference between doctrine that evolves through reflection and doctrine that erodes through accumulated compromise. The solo structure is partly a safeguard against the second kind of change.
The invisible phase
Right now, most of the work is invisible. Phase 1 is a command-line interface — no public launch, no users, no product in any sense that most people would recognize. What exists is the governed engine, the ledger, the knowledge substrate, the intelligence loop. None of that is visible. All of it is load-bearing.
There is a version of this work that would be easier to explain and harder to stand behind. I am trying to do the opposite.
I am aware of what this costs. Long stretches without external validation are genuinely difficult. There is no funding milestone to signal you are on the right track, no team meeting where someone else confirms the direction makes sense, no user growth chart to consult when you are uncertain. What there is, instead, is the architecture itself — and whether it holds up under scrutiny, whether the decisions compound in the right direction, whether the system being built is the one that actually needed to be built.
The case for this kind of work
I am not arguing that every founder should operate this way. Most products do not require a twenty-year horizon. Most problems are better served by moving quickly, iterating on user feedback, and compressing the timeline as much as possible.
But some problems are not like that. Some problems require you to hold a position under pressure for long enough that the position actually matters. Some infrastructure takes a decade to become what it needs to be. Some trust has to be earned over years before it means anything.
I think the work I am doing is in that category. The governed AI infrastructure I am building will only be meaningful if it is still standing — and still principled — when the questions around AI agency become genuinely urgent for a lot of people. The civic work will only matter if it produces tools that survive contact with reality over an extended period, not just tools that seem useful at launch.
Both of those things require staying in it for a long time. That is the choice I have made, and the timeline I am building toward.