OpenAI just baked a "figure it out and don't stop until you're done" mode into its coding agent. It's called /goal, and it shipped in Codex CLI 0.128.0.

The name might not ring a bell, but the idea will. Last fall, Australian developer Geoff Huntley published what he called the "Ralph loop": while :; do cat PROMPT.md | claude-code; done. OpenAI has now made it an official feature. A one-line Bash trick became a flagship capability of a big-tech coding agent in under six months; that doesn't happen often.
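The one-liner really is the whole mechanism: the agent's only memory between turns is whatever it left in the repo. Below is a sketch with the implicit stop condition made explicit; "agent" is a stub standing in for the real CLI, and the DONE-marker convention is illustrative, not part of any tool.

```shell
# Ralph-style loop with an explicit stop condition (a sketch, not the
# original one-liner): keep feeding the same prompt file to the agent
# until the agent leaves a DONE marker in the working directory.
agent() {   # stub for claude-code / codex: counts turns via a file
  turns=$(( $(cat .turns 2>/dev/null || echo 0) + 1 ))
  echo "$turns" > .turns
  [ "$turns" -ge 3 ] && touch DONE   # pretend turn 3 finishes the job
}

echo "Build the todo API per SPECS.md" > PROMPT.md
until [ -f DONE ]; do
  cat PROMPT.md | agent
done
echo "finished after $(cat .turns) turns"   # → finished after 3 turns
```

The real loop has no DONE check at all, which is exactly the gap the /goal feature's self-evaluation step fills.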

What Is It?

The headline addition in Codex CLI 0.128.0, released April 30, 2026, is persisted /goal workflows. Per the GitHub release notes: "app-server APIs, model tools, runtime continuation, and TUI controls for create, pause, resume, and clear" — all landed at once. This isn't just a new slash command. It's a persistent long-running goal system that survives session disconnects.

Simon Willison summed it up in one line: "OpenAI's Codex CLI coding agent adds their own version of the Ralph loop: you can now set a /goal and Codex will keep on looping until it evaluates that the goal has been completed... or the configured token budget runs out."

Under the hood, it runs on two prompt files — goals/continuation.md (a prompt that asks "did we hit the goal?" at the end of each turn) and goals/budget_limit.md (a prompt that gracefully shuts things down when the token limit is near). These two prompts get injected automatically at the end of every cycle, letting the agent decide its next move without any human input.
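The release notes name the two files but don't reproduce their contents, so the wording below is a guess at what such prompts could say, not OpenAI's actual text:

```shell
# Hypothetical contents for the two goal prompts (stand-ins only;
# OpenAI's real prompt text is not published in the release notes).
mkdir -p goals

cat > goals/continuation.md <<'EOF'
Review the goal and the work completed this turn.
If the goal is fully met, say so and stop.
Otherwise, name the single next task and continue.
EOF

cat > goals/budget_limit.md <<'EOF'
The token budget is nearly exhausted.
Start no new work; commit what is stable and summarize what remains.
EOF
```

Whatever the exact wording, the design point stands: the loop's exit condition lives in a prompt the model answers, not in code a human watches.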

Why Did the Ralph Loop Suddenly Become the Standard?

Ralph is named after The Simpsons character Ralph Wiggum — "well-meaning but a little dim." The analogy holds. One task at a time. If it fails, add a "sign" to the prompt. Loop again. Geoff Huntley claims this approach let him finish a $50,000 contract for $297. The loop even auto-generated a compiler, built on LLVM, for a new programming language called CURSED.

Three core insights behind it:

  1. One task at a time
    Not multi-agent microservices — a single-process monolith. One cycle = one feature implemented.
  2. Deterministic context injection
    Inject specs and plans in the same format every turn. Huntley's key observation: "The real context window is 147–152K," not the advertised 200K — the effective ceiling is narrower than the marketing says.
  3. Failure is your tuning signal
    When the agent goes off-track, don't rip out the system — just add a "don't do this" line to the prompt and loop again. You're the operator, not the coder.
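Insights 2 and 3 fit in a few lines of shell. The file names and spec text here are illustrative, not Huntley's: every turn rebuilds the prompt from the same files in the same order, and each failure appends one "sign" instead of restructuring anything.

```shell
# Deterministic context injection plus failure-as-signal (illustrative
# file names): the prompt is rebuilt identically each turn, and bad
# cycles only ever add one guardrail line.
printf 'Build a todo CRUD API with SQLite.\n' > SPECS.md
printf -- '- DO NOT touch vendor/\n' > SIGNS.md

build_prompt() {
  cat SPECS.md                                  # spec, always first
  echo '## Signs (things that went wrong before)'
  cat SIGNS.md                                  # accumulated guardrails
}

# After a bad cycle: add a sign, rebuild, loop again.
echo '- DO NOT rewrite the test harness' >> SIGNS.md
build_prompt > PROMPT.md
```

You tune the signs, not the loop — which is what "operator, not coder" means in practice.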
| Category | Old Codex CLI (≤0.127) | Codex CLI 0.128 + /goal |
| --- | --- | --- |
| Task unit | One developer turn | One-sentence goal → autonomous multi-turn |
| Session interruption | Context lost on disconnect | Persisted — resume, pause, or clear |
| Exit condition | Developer decides "done" | Agent self-evaluation OR token limit |
| Operating model | Copilot — assists while you're there | Overnight — done while you sleep |
| Risk | One bad line | Token runaway + wasted effort accumulation |

Anthropic had already been running a similar pattern in Claude Code's SDK layer since last fall, and OpenAI pulled it up into an official slash command. The inflection point is that "coding without human intervention" is no longer an experiment — it's a product surface.

Getting Started

  1. Update Codex CLI
    Run codex update or npm i -g @openai/codex@latest. Confirm you're on 0.128.0 or higher.
  2. Start with a greenfield project
    Huntley is emphatic about this — "I never use Ralph on an existing codebase." The sweet spot is 0–90% of a brand-new project.
  3. Write a one-page SPECS.md
    Feature spec + standard libraries + explicit prohibitions. This gets injected every turn, so keep it short and decisive.
  4. Enter your /goal
    Something like /goal "Build a todo CRUD API with SQLite + tests + README" — one sentence. Set your token ceiling in budget_limit.md.
  5. Watch the logs
    Don't walk away for the first two hours — monitor it. If you spot a pattern of wasted work, add one line to SPECS.md and restart.
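Putting steps 3 and 4 together: a SPECS.md in that spirit might look like the following. The contents are illustrative, not from Huntley's write-up.

```shell
# Write a one-page spec via heredoc; every line here is an example.
cat > SPECS.md <<'EOF'
# SPECS
Feature: todo CRUD API backed by SQLite, with tests and a README.
Standard libraries only: built-in HTTP server, sqlite3. No ORMs.
Prohibited: editing generated files, committing secrets, skipping failing tests.
EOF
```

Then launch with /goal "Build a todo CRUD API with SQLite + tests + README" and set the token ceiling in budget_limit.md, as the steps above describe.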


Deep Dive Resources

Simon Willison — "Codex CLI 0.128.0 adds /goal": primary source for the release analysis; cites Eric Traut's tweet and the official release notes. simonwillison.net

Geoff Huntley — "The Ralph Loop": the original Ralph write-up in full, including the $50K → $297 case study and the context-window hacks. ghuntley.com

OpenAI Codex GitHub Releases: official v0.128.0 release notes with the full specification for persisted /goal workflows. github.com/openai/codex

Th0rgal/open-ralph-wiggum: open implementation that runs the Ralph pattern across Codex, Cursor, and Copilot CLI. github.com/Th0rgal