From a Login Module to a Protocol
Back in November I wanted to build a Nuxt module for WebAuthn passkeys. Today OpenApe is 10 npm packages, 2 Nuxt modules, a protocol spec, and a desktop app. A story about problems that hide other problems.
I wanted to build a login module.
That was back in November. A Nuxt module that lets web apps support WebAuthn passkeys instead of passwords. Small, focused, one job. One weekend, then done.
That was the plan, anyway.
Problem 1: Who is the identity provider?
The moment you implement WebAuthn, you stumble over a deceptively simple question: how does a service provider know which identity provider to talk to?
With Auth0 or Clerk the answer is easy: you registered with the provider, so the provider is hardcoded. But the moment you think decentralized, that resolution step is missing. A user types in their email address, and then what?
The answer has been sitting around since 1983: DNS. Every email address has a domain. Every domain can have TXT records. So I built DNS discovery: a TXT record at _ddisa.example.com points to the responsible identity provider. An email address is enough — DNS does the rest.
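The discovery step fits in a few lines with Node's built-in resolver. A sketch, with two loud caveats: the `v=ddisa1; idp=<url>` record layout is an illustrative assumption, not the exact format from the spec, and `parseDdisaRecord`/`discoverIdp` are hypothetical helpers, not the @openape/core API:

```typescript
import { resolveTxt } from "node:dns/promises";

// Parse a DDISA-style TXT record value into an IdP URL.
// The "v=ddisa1; idp=<url>" layout is assumed for illustration.
function parseDdisaRecord(record: string): string | null {
  const fields = Object.fromEntries(
    record.split(";").map((part) => {
      const [key, ...rest] = part.trim().split("=");
      return [key, rest.join("=")]; // rejoin, URLs contain "=" rarely but ":" often
    }),
  );
  return fields["v"] === "ddisa1" && fields["idp"] ? fields["idp"] : null;
}

// Resolve the identity provider for an email address via DNS.
async function discoverIdp(email: string): Promise<string | null> {
  const domain = email.split("@")[1];
  // TXT answers arrive as arrays of string chunks; join each one.
  const records = await resolveTxt(`_ddisa.${domain}`);
  for (const chunks of records) {
    const idp = parseDdisaRecord(chunks.join(""));
    if (idp) return idp;
  }
  return null;
}
```

The nice property: there is no registry and no central directory. Whoever controls the domain controls the identity provider, exactly like MX records for mail.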
The login module became two: @openape/core for DNS resolution, @openape/auth for the actual WebAuthn flow.
Problem 2: But how does an agent even authenticate?
Around the same time, my own projects started using AI agents seriously. And before I could even think about permissions, there was a much more basic question: passkeys need a finger. Agents don't have fingers. How is the thing supposed to sign in at all?
So I extended the auth flow with a second path: Ed25519 challenge-response, essentially like SSH keys. The agent holds a private key, the IdP holds the public key, the IdP issues a challenge, the agent signs. Same pattern as WebAuthn — just without the browser and without a human in the loop.
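The whole round trip fits in a dozen lines with Node's built-in crypto. This mirrors the pattern, not the actual @openape/auth API:

```typescript
import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// The agent's long-lived keypair. In practice the private key lives
// on the agent's disk and the public key is registered with the IdP.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// 1. The IdP issues a random challenge.
const challenge = randomBytes(32);

// 2. The agent signs it. For Ed25519, Node takes `null` as the
//    digest algorithm because the scheme hashes internally.
const signature = sign(null, challenge, privateKey);

// 3. The IdP verifies the signature against the registered public key.
const ok = verify(null, challenge, publicKey, signature);
console.log(ok); // true
```

No shared secret ever crosses the wire, which is exactly why the same shape works for WebAuthn on one path and bare Ed25519 on the other.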
And something nice happens: at the protocol level, the distinction between human and agent disappears. Both have an identity at the IdP. Both authenticate via the same scheme. Both can use the same CLI (apes). The only difference: the human has a passkey on a laptop, the agent has an Ed25519 key on disk.
Problem 3: OK it's signed in — what is it actually allowed to do?
The agent can log in. Fine. But can it just do anything now? Of course not. I want granular control — read this email yes, delete that one no, review this code yes, merge it directly no.
OAuth would be the obvious answer for authorization. But OAuth was designed for humans — authorization code flow, browser redirect, the whole dance. For a background agent it feels wrong.
So I built grants. A grant is a pre-approved JWT that allows an agent to perform a specific action. Granular, time-limited, revocable at any time. I approve once with my passkey — the agent can then act without further interaction, but only within the scope it was given.
Two packages became four: @openape/grants joined the family, then @openape/proxy as an HTTP gateway for agents.
Problem 4: How does an agent sign in on my behalf?
Grants answered "the agent is allowed to do exactly this one action." But what if an agent needs to work at a service as me, for hours or days? Not a grant per call, but a session, under my identity.
Grants couldn't express that. A grant is a ticket for a single call, not a login. So it became its own protocol building block: delegation. I authorize my agent once to sign in to a service on my behalf — and it then runs with its own session, as me, but with a clear audit trail: "This wasn't Patrick himself, this was Agent X on Patrick's behalf."
Built on top of RFC 8693 (Token Exchange) with the act claim from OAuth 2.0. Standards where possible, custom extensions where necessary.
Problem 5: Where does the key material live?
Up to that point, everything ran in the browser or on the server. But for an identity platform that's not enough. Keys belong on the user's device, not in some server-side storage. So a desktop app joined the picture — Tauri v2, Vue 3 frontend, Rust backend. Plus a Rust CLI for power users and server setups.
The desktop app picked up another job along the way: it orchestrates AI agents as isolated OS users. Each agent runs in its own user account, with its own permissions, in its own environment. That's no longer "an identity module" — that's infrastructure for the next generation of AI-driven workflows.
Problem 6: How do you write this down?
At some point I had 10 packages, 2 Nuxt modules, 6 apps, a desktop app, a CLI, and no specification. A protocol needs a specification, though — otherwise it's just code. So I started writing DDISA: DNS-Discoverable Identity & Service Authorization.
Three documents:
- Core: DNS discovery, OIDC extensions, WebAuthn and Ed25519 auth flows, token format
- Grants: Grant-based authorization REST API, AuthZ-JWT, polling model
- Delegation: Delegation protocol on top of RFC 8693
Plus JSON Schemas (Draft 2020-12) for every data format and complete HTTP examples for every flow. Compliance levels for implementations: Core, Core+Grants, Core+Grants+Delegation.
Where this stands today
Today, on April 9, 2026, OpenApe looks like this:
| Component | Description |
|---|---|
| Protocol | 3 specs (Core, Grants, Delegation), JSON Schemas, full examples |
| Monorepo | 10 npm packages, 2 Nuxt modules, 6 deployed apps |
| Desktop App | Tauri v2, orchestrates AI agents as isolated OS users |
| CLI | apes for grant management, ape-shell as a grant-secured shell |
| Free IdP | Hosted identity provider, free to use |
This morning I committed `ape-shell`, a shell replacement that pipes every command through the grant system. `ape-shell -c "git status"` requests a grant that's valid for the session. Subsequent commands reuse it. Zero-latency re-execution with human control.
Back in November I wanted to build a login module.
What I learned
The most important projects don't get drawn on a whiteboard. They emerge when you solve a concrete problem and notice the problem was hiding another problem. And the next one. And the one after that.
If I had sat down in November to draw up "a plan for a decentralized identity protocol," I would have overwhelmed myself. Instead I built a login module. That worked. Then the next piece. That worked too. And so on, until the result was bigger than the original plan.
The only reason it worked: every step was small and concrete on its own. The vision only emerged in hindsight. It wasn't a starting point, it was a consequence.
That's maybe the part of "building in public" that's actually valuable: not the public showing, but the public admission that you didn't know from day one what you were building. You found out by building. At least that's how it was for me.
OpenApe is open source. The protocol spec, the code, the apps — all on github.com/openape-ai. If you're interested in decentralized identity, AI agent authorization, or just in interesting protocol design: take a look, open issues, write to me.
What was the last project that grew beyond what you planned?