Status: draft
Date: 2026-04-27
Tracked in: TODO.md — AI Build Helper (BYO API key)
ODS Pages has had an AI workflow since day one, but it lives entirely outside the framework: builders copy a system prompt and their current spec into ChatGPT or Claude, work back and forth, and paste the result back. The asset bundle ships the prompt as `Specification/build-helper-prompt.txt`, with provider-specific variants under `BuildHelpers/Claude/` and `BuildHelpers/ChatGPT/`, and the React framework wraps the copy/paste UX as a 3-step screen (`EditWithAiScreen.tsx`). Flutter has no AI surface at all today.
This split has accumulated friction now that more builders have their own API keys:

- A user with an Anthropic key still has to: open the AI screen, copy the system prompt, switch to claude.ai, paste, switch back to ODS, copy the spec, switch back to claude.ai, paste, wait, copy the result, switch back to ODS, paste, save. That’s 8+ context switches for a one-line change. Having a key buys nothing.
- Each “Edit with AI” round-trip is a brand-new conversation. “Now make the priority field default to medium” requires re-pasting the prompt, the spec (which may have been updated since the last round), and the new instruction. There’s no continuity for non-trivial edits.
- Today the user pastes the AI’s output back into a textarea and hits Save. If the AI hallucinated an `OdsBrandng` field or dropped a `matchField`, the validator catches it on load — but the user has already lost their previous spec and has to undo by hand. There’s no diff-review step where they can see what changed before committing.
- The whole “Edit with AI” surface is React-only. Flutter Quick Build exists, but there is nothing equivalent to “edit an existing app via AI.”
- We currently maintain two prompt files (Claude and ChatGPT), but the framework only wires the prompt asset for one path. If the in-app integration locks to a single provider, builders on the other side become second-class.
Add an in-app AI Build Helper that activates when the builder has provided an API key. Two interaction modes share one provider layer:
One-shot edit. User types “add a priority field with low/med/high options”; framework sends current spec + instruction + system prompt to the configured provider; receives a proposed new spec; renders a side-by-side diff; user accepts or discards. Same as today’s copy/paste loop, just collapsed into one button + one diff.
Multi-turn chat. User opens a chat panel, has an ongoing conversation. Each AI message that proposes a spec change renders as a diff card with its own Apply / Discard buttons. Conversation history (in-memory for v1) gives the AI context for “now make the default ‘medium’” without re-pasting the spec.
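Collapsed to code, the one-shot mode is little more than one provider call plus a diff step. A minimal sketch, assuming the `AiProvider` interface proposed later in this document; the `oneShotEdit` helper and the exact prompt wrapping are illustrative, not the shipped API:

```typescript
// Sketch of the one-shot edit flow. `AiProvider` mirrors the interface
// proposed in this ADR; `oneShotEdit` is a hypothetical helper name.
type Message = { role: 'user' | 'assistant'; text: string };

interface AiProvider {
  sendMessage(
    systemPrompt: string,
    history: Message[],
    user: string,
    opts: { model: string; apiKey: string },
  ): Promise<{ text: string; usage: { in: number; out: number } }>;
}

// Wraps the current spec + instruction into one user message, sends it,
// and returns the proposed spec for the diff view. History is empty:
// each one-shot call is a fresh conversation, matching today's
// copy/paste loop.
async function oneShotEdit(
  provider: AiProvider,
  systemPrompt: string,
  currentSpec: string,
  instruction: string,
  opts: { model: string; apiKey: string },
): Promise<{ proposedSpec: string }> {
  const user = `Current spec:\n${currentSpec}\n\nInstruction: ${instruction}`;
  const { text } = await provider.sendMessage(systemPrompt, [], user, opts);
  // Caller renders currentSpec vs. proposedSpec side by side,
  // then Apply / Discard.
  return { proposedSpec: text };
}
```

The multi-turn mode reuses the same call with a non-empty `history`, which is what gives the AI context without re-pasting the spec.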
Both modes go through one `AiProvider` abstraction with Anthropic and OpenAI implementations from day one. The builder picks a provider + model in the framework settings; the key is stored in `SettingsStore` (Flutter) / `localStorage` (React) for v1, with OS-keychain integration noted as a follow-up.
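To make the v1 storage story concrete, here is a sketch of what the `ods_ai_settings` record and its display masking could look like. The field names, `loadAiSettings`, and `maskKey` are illustrative assumptions, not the shipped shape:

```typescript
// Hypothetical shape of the record stored under the `ods_ai_settings`
// key (localStorage on React; SettingsStore would hold the same JSON on
// Flutter). Plaintext in v1 — OS keychain is an acknowledged follow-up.
interface AiSettings {
  provider: 'anthropic' | 'openai';
  model: string;
  apiKey: string;
}

const STORAGE_KEY = 'ods_ai_settings';

function loadAiSettings(storage: Pick<Storage, 'getItem'>): AiSettings | null {
  const raw = storage.getItem(STORAGE_KEY);
  if (!raw) return null;
  try {
    return JSON.parse(raw) as AiSettings;
  } catch {
    return null; // corrupt record: treat as "no key configured"
  }
}

// For display only, e.g. "sk-ant…abcd". The full key must never be logged.
function maskKey(key: string): string {
  if (key.length <= 8) return '…';
  return `${key.slice(0, 6)}…${key.slice(-4)}`;
}
```

Accepting an injected `storage` keeps the function testable without a browser, which matters once the same shape has to round-trip through both renderers.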
The existing 3-step copy/paste flow stays as the no-key fallback: clicking “Edit with AI” without a configured key still works as it does today, plus a one-liner “have an API key? Set it once in Settings → AI to skip the copy/paste.”
Good:

Bad:

- `api.anthropic.com` / `api.openai.com` is new for the framework — both renderers’ only HTTP today is to the local PocketBase / SQLite. Adds one egress dependency.
- `localStorage` / `SettingsStore` are less secure than the OS keychain. Mitigation: clearly mark v1 as “best-effort, see Settings → AI for what’s stored where”; mask in the UI; never log.

Neutral:

- The AI could emit `apply_patch(pointer, value)` operations directly rather than returning a full spec. Cleaner for large specs but adds plumbing on top of v1. Open question, see §5.
- `seedData` could carry real names. Default off; document in the AI Settings panel.

The plan is phased so the provider layer + settings land first (small, well-tested), then the UI surfaces in order of impact.
Provider layer files: `src/engine/ai-provider.ts` (TS) and `lib/engine/ai_provider.dart` (Dart).

Interface (TS, Dart mirror):
```ts
interface AiProvider {
  name: 'anthropic' | 'openai'
  models: Array<{ id: string; label: string }>
  estimateCost(systemPrompt: string, history: Message[], user: string):
    { inputTokens: number; estimatedCostUsd: number }
  sendMessage(
    systemPrompt: string,
    history: Message[],
    user: string,
    opts: { model: string; apiKey: string; signal?: AbortSignal },
  ): Promise<{ text: string; usage: { in: number; out: number } }>
}
```
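The two implementations differ mainly in where the system prompt lives in the request body. A sketch of the body builders, based on the publicly documented Anthropic Messages API and OpenAI Chat Completions API shapes; the function names and the `max_tokens` value are illustrative:

```typescript
// Pure request-body builders for the two day-one providers. Keeping
// them pure (no fetch inside) is what makes the fake-transport
// conformance scenario possible.
type Message = { role: 'user' | 'assistant'; text: string };

function buildAnthropicBody(systemPrompt: string, history: Message[], user: string, model: string) {
  return {
    model,
    max_tokens: 8192, // illustrative cap
    system: systemPrompt, // Anthropic takes the system prompt as a top-level field
    messages: [
      ...history.map(m => ({ role: m.role, content: m.text })),
      { role: 'user', content: user },
    ],
  };
}

function buildOpenAiBody(systemPrompt: string, history: Message[], user: string, model: string) {
  return {
    model,
    messages: [
      { role: 'system', content: systemPrompt }, // OpenAI takes it as the first message
      ...history.map(m => ({ role: m.role, content: m.text })),
      { role: 'user', content: user },
    ],
  };
}
```

`sendMessage` would then be a thin wrapper: build the body, POST it with `fetch` / `package:http`, extract `text` and `usage` from the response.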
- `AnthropicProvider` (Messages API), `OpenAiProvider` (Chat Completions API).
- Transport: `fetch` (TS) / `package:http` (Dart). No SDK deps.
- Settings UI — React: `SettingsDialog`. Flutter: equivalent in framework settings (not per-app — the key is a user-level concern).
- Storage: `ods_ai_settings` JSON in `localStorage` / `SettingsStore`. v1 plaintext; OS-keychain tracked as a follow-up ADR.
- `EditWithAiScreen`: textarea for instruction + “Generate” button (disabled if no key configured).
- System prompt: `Specification/build-helper-prompt.txt`; user message wraps the current spec.
- Surface: `EditWithAiScreen` (and possibly also the `AdminDashboard`? — defer).
- Conformance: `s27_ai_provider_request_shape` — for the same (systemPrompt, history, userInput), both providers must produce a request body that includes the system prompt, the user message, and the configured model. Fake transport, no real network. Same red→green→both-drivers-pass discipline as the ADR-0002 scenarios.
- `AiSettingsSection` in Flutter framework settings.

| Phase | Lands | Surface | Frameworks |
|---|---|---|---|
| 1 | Provider layer + tests | none (engine) | React + Flutter parallel |
| 2 | AI Settings panel | Settings | React + Flutter |
| 3 | One-shot edit | EditWithAi screen | React |
| 4 | Multi-turn chat | EditWithAi panel | React |
| 5 | Conformance + mutation | tests | both |
| 6 | Flutter mirror | EditWithAi screen | Flutter |
Estimated 4–6 sessions across both frameworks, similar to ADR-0002.
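The `s27_ai_provider_request_shape` scenario described above can be driven entirely in-memory: each provider's body construction is run without a transport, and the same assertions apply to both. A sketch under the assumption that each provider exposes a pure body builder; all names here are illustrative:

```typescript
// Conformance check sketch: given the same (systemPrompt, history,
// userInput), a provider's request body must carry the system prompt,
// the user message, and the configured model. No real network.
type Message = { role: 'user' | 'assistant'; text: string };
type Body = Record<string, unknown>;
type BodyBuilder = (sys: string, history: Message[], user: string, model: string) => Body;

function checkRequestShape(build: BodyBuilder, sys: string, history: Message[], user: string, model: string): string[] {
  const body = build(sys, history, user, model);
  // Simplification: substring search over the serialized body works for
  // plain prompts; a real scenario would inspect structured fields.
  const flat = JSON.stringify(body);
  const failures: string[] = [];
  if (body['model'] !== model) failures.push('model missing');
  if (!flat.includes(sys)) failures.push('system prompt missing');
  if (!flat.includes(user)) failures.push('user message missing');
  return failures; // empty array = scenario passes for this provider
}
```

The red→green discipline then falls out naturally: write the check first, watch both drivers fail, and implement each provider until the same assertions pass for both.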