- `POST /export/dump` kicks off a background job and returns `{ dumpId, statusUrl }` immediately
- `GET /export/dump/:dumpId` polls status/progress
- `GET /export/dump/:dumpId/download` streams the completed dump file from R2
- `GET /export/dump` (sync path) is unchanged and still works for small databases

The current endpoint loads the whole database into memory and has to finish inside the 30-second Workers limit. That's fine for small databases but breaks for anything approaching the DO storage limit. The async path runs the export in chunks via DO alarms that chain themselves until done, then writes the result to R2 for streaming download.
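The alarm-chaining loop can be sketched roughly as below. This is an illustration only: `processDumpChunk`, the 20-second budget, and the checkpoint fields mirror the diagram further down, but DO storage and R2 are replaced with in-memory stand-ins, and the helper names are hypothetical.

```typescript
// Sketch of one alarm invocation of the chunked export loop.
// DO alarms, storage, and R2 are replaced with in-memory stand-ins.

interface Checkpoint {
  table: string;      // table currently being exported
  offset: number;     // next row offset within that table
  partNumber: number; // next multipart part number
}

const TIME_BUDGET_MS = 20_000; // leave headroom under the ~30s Workers limit
const BATCH_SIZE = 2;          // rows per read (tiny for the demo)

// Read batches until the time budget is spent. Returns null when the dump
// is finished, or a Checkpoint for the next alarm to resume from.
function processDumpChunk(
  rows: Record<string, string[]>,  // table -> rows (stand-in for SQLite)
  cp: Checkpoint,
  parts: string[],                 // stand-in for uploaded multipart parts
  now: () => number = Date.now,
): Checkpoint | null {
  const start = now();
  const tables = Object.keys(rows);
  let { offset, partNumber } = cp;

  let ti = tables.indexOf(cp.table);
  while (ti < tables.length) {
    const data = rows[tables[ti]];
    while (offset < data.length) {
      if (now() - start >= TIME_BUDGET_MS) {
        // Budget spent: persist progress and let the next alarm continue.
        return { table: tables[ti], offset, partNumber };
      }
      const batch = data.slice(offset, offset + BATCH_SIZE);
      parts.push(batch.join("\n") + "\n"); // flush the batch as a part
      partNumber++;
      offset += batch.length;
    }
    ti++;
    offset = 0; // next table starts from the top
  }
  return null; // done: caller completes the multipart upload
}
```

The `now` parameter is injected purely so the budget logic is testable; the real code would just call `Date.now()`.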
Full flow demonstrated:
- `POST /export/dump` starts the async export, returns `dumpId` + `statusUrl`
- `GET /export/dump/:id` polls status (shows `complete` with progress)
- `GET /export/dump/:id/download` streams a valid `.sql` file from R2
- Unauthorized requests get HTTP 401

Client          Worker (Hono)          Durable Object          R2
│ │ │ │
├─POST /export/dump────────►│ │ │
│ ├─startAsyncDump()─────────►│ │
│ │ ├─initiateDump() │
│ │ │ enumerate tables │
│ │ ├─createMultipartUpload─►│
│◄──202 { dumpId }─────────┤ │ │
│ │ alarm ├─processDumpChunk() │
│ │ │ read rows in batches │
│ │ │ buffer → flush part──►│
│ │ │ if time < 20s: loop │
│ │ │ else: save checkpoint │
│ │ alarm ├─continue from checkpoint
│ │ │ ... │
│ │ ├─complete multipart───►│
│ │ │ │
├─GET /export/dump/:id─────►│──getAsyncDumpStatus()───►│ │
│◄──{ status: "complete" }──┤ │ │
│ │ │ │
├─GET .../download─────────►│──streamDumpDownload()───►│ │
│ │ ├─get object───────────►│
│◄──stream .sql file────────┤◄──────────────────────────┤◄──ReadableStream──────┤
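The "buffer → flush part" step in the diagram can be sketched as follows: rows are serialized to `INSERT` statements and the buffer is flushed as a multipart part once it crosses a size threshold. The threshold here is shrunk far below R2's real 5 MiB minimum part size for the demo, and `escapeSql`/`flush` are hypothetical helpers standing in for real escaping and `uploadPart`.

```typescript
// Serialize rows into INSERT statements, flushing buffered SQL as a
// multipart part whenever it crosses the part-size threshold.

const PART_SIZE = 64; // bytes; real parts must be >= 5 MiB for R2 multipart

function escapeSql(v: string): string {
  return "'" + v.replace(/'/g, "''") + "'"; // naive single-quote escaping
}

function dumpTable(
  table: string,
  rows: string[][],
  flush: (part: string) => void, // stand-in for the R2 uploadPart call
): void {
  let buffer = "";
  for (const row of rows) {
    buffer += `INSERT INTO ${table} VALUES (${row.map(escapeSql).join(", ")});\n`;
    if (buffer.length >= PART_SIZE) {
      flush(buffer); // buffer → flush part
      buffer = "";
    }
  }
  if (buffer) flush(buffer); // final partial part
}
```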
Key design decisions:

- The export runs inside DO alarms that re-schedule themselves until the dump is finished, so no single invocation has to beat the 30-second limit
- Each chunk works within a ~20-second budget, then saves a checkpoint and hands off to the next alarm
- Output goes to R2 via multipart upload, flushing buffered SQL as parts along the way
- The download endpoint streams the finished object from R2 instead of buffering it in the Worker
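For the poll endpoint, the status payload might look like the shape below. The field names beyond `dumpId` and `status` are illustrative assumptions, not lifted from the implementation.

```typescript
// Illustrative status payload for GET /export/dump/:dumpId.
// Progress fields are assumptions; only dumpId/status appear in the PR text.
interface DumpStatus {
  dumpId: string;
  status: "running" | "complete" | "error";
  tablesDone: number;  // tables fully exported so far
  tablesTotal: number; // known up front from the initial table enumeration
}

// Render a human-readable progress string for the status response.
function formatProgress(s: DumpStatus): string {
  const pct =
    s.tablesTotal === 0 ? 100 : Math.round((s.tablesDone / s.tablesTotal) * 100);
  return `${s.status} (${pct}%)`;
}
```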
Tests: `pnpm test --run src/export/dump-async.test.ts`
/claim #59