
Codex /goal: How to Meta Prompt It For Days of Autonomous Work (2026)
Codex /goal is the biggest AI coding advancement of 2026, but it's useless without a meta-prompted mission. Here's how to make Codex work for days on a single goal.
Codex's /goal command is the most important thing to land in AI coding this year, and it isn't close. Hand it an objective and Codex will keep working through it for hours, sometimes across an entire night, without you babysitting a single turn. The problem is that the prompt you type after /goal is doing most of the work, and the prompt you'd write yourself almost never has enough in it.
The fix is letting another AI write the /goal prompt for you. Sounds silly, works absurdly well. Once you stop hand-writing the mission, Codex starts producing the kind of output that makes you recheck the diff because you don't believe it.
What Is the Codex /goal Feature?
/goal is a slash command in the Codex CLI that turns Codex into a long-horizon autonomous agent. You give it a high-level mission, and it keeps working, planning, executing, and self-correcting, until the mission is done. We're talking hours of continuous work in a single invocation. Sometimes longer.
It shipped in Codex CLI 0.128.0. If you don't see it in your slash command list, it's probably gated behind a flag. Run this in your terminal and it'll show up:
```
codex features enable goals
```

Then restart Codex and /goal will be available. This is different from a normal Codex prompt. A normal prompt runs a few turns and stops. /goal sets up a persistent objective the agent commits to, and Codex will keep iterating against that objective across many cycles of plan, act, verify, and correct.
Why Hand-Written /goal Prompts Don't Work
A short prompt fed into /goal produces results that might as well have come from a regular prompt. The whole point of /goal is the long horizon, but if your mission is vague, the agent burns that horizon on the wrong things. I tried this for weeks. Every prompt I wrote felt detailed enough in the moment; then, an hour into watching Codex run, I'd realize I'd left out three constraints, two acceptance criteria, and a whole architectural assumption.
Humans are bad at writing /goal prompts because we underestimate what the agent needs to know to stay on track without us. We write like we're going to be in the loop. With /goal, you're not in the loop. The agent fills in the blanks itself, and the blanks compound.
How to Meta Prompt Codex /goal
Open a second AI session that already has context on your repo: ChatGPT with the project connected, Claude inside the codebase, or a separate Codex window in the same directory. Then ask it to write the prompt for you. Something like this:

```
I'm about to use Codex's /goal command. Before you write anything, look up
how /goal actually works in the Codex CLI so your output matches what the
agent expects. Then walk through this codebase and pick the three
highest-leverage missions /goal could realistically finish in one run. For
each one, hand me a complete /goal prompt I can paste straight into the
terminal: scope, constraints, files in play, definition of done, and the
checks the agent should run before it considers the mission complete.
```
Three things make this work. The AI is forced to look up /goal before it writes anything, so the prompt isn't guessing at what the command can do. It has to read the actual project before proposing missions, which kills generic suggestions. And you walk away with three options instead of staking everything on one prompt you wrote in two minutes.
Pick the option that matches what you actually want done, copy the detailed prompt, open Codex, type /goal, and paste. That's it. Walk away. Come back in a few hours.
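To make the shape of the output concrete, here's an illustrative sketch of what a meta-prompted mission tends to look like. The repo, file paths, and commands below are hypothetical stand-ins, not output from a real run; the point is the structure: scope, constraints, definition of done, and verification checks, all spelled out.

```
/goal Migrate every data-access call in src/db/ from raw SQL strings to the
query builder in src/db/builder.ts.

Scope: only files under src/db/ and their direct call sites. Do not touch
the public API surface in src/api/.

Constraints: preserve existing transaction semantics, add no new
dependencies, and keep the build green after every logical change.

Definition of done: no raw SQL strings remain outside src/db/legacy/, and
the full test suite passes.

Before considering the mission complete: run the build, run the tests, and
grep the tree for leftover raw SQL strings. Fix anything those checks
surface before stopping.
```

Notice how little is left for the agent to reinterpret. Every section answers a question the agent would otherwise answer for itself hours into the run.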
Why Meta Prompting Beats Prompt Engineering
Meta prompting works because writing prompts is itself a skill an LLM can do better than you, especially when you give it the context. You know your project, but the AI knows what kinds of instructions agents respond to, what edge cases they tend to miss, and how to phrase acceptance criteria so they don't get reinterpreted halfway through a run.
This is the same lesson behind a lot of the agent tooling that works right now: the model is better at structuring instructions for itself than you are at structuring instructions for the model. Hand-written prompts are a relic. If you're still manually writing /goal prompts, you're leaving most of the capability on the table.
When to Actually Use /goal
/goal shines on missions with a clear definition of done and a lot of mechanical work between here and there. Migrating a codebase, refactoring a subsystem end-to-end, building a feature that touches a dozen files, writing test suites for legacy code. Anything where the steps are knowable in advance but tedious to execute manually.
It's not the right tool for ambiguous design work or anything that needs your taste in the loop. Don't use /goal to design a UI. Don't use /goal to make architecture calls you haven't made yet. Save it for the "I know what I want, I just don't want to spend eight hours typing it" tasks. For the design and architecture side of working with agents, I covered that in Architecture in the Age of AI Agents.
How /goal Compares to Claude Code and Other Agents
Nothing in Claude Code matches /goal right now. Claude Code is still better at fast, conversational iteration, but it has no equivalent long-horizon mission mode. Droid from Factory has Missions, which is the closest competitor and works well for background tasks, but Codex /goal feels meaningfully more ambitious in what it'll attempt. I covered the broader landscape of these tools in my Claude Code vs Codex vs Cursor comparison and the best Claude Code alternatives roundup.
If your project is set up well for autonomous agents (clear AGENTS.md, good test coverage, an enforceable build), /goal will run circles around any normal prompt-driven workflow. If your project is a mess, it'll hit walls. Worth fixing your project structure for AI agent collaboration before you ask /goal to do anything serious.
The Bottom Line
/goal is the closest thing to "just go build it" that AI coding has produced. The unlock is meta prompting. Stop treating the prompt as a thing you write and start treating it as a thing another model writes for you with the project loaded. That single shift turns /goal from a marginal upgrade into days of compounding output.
Pick a real task and try it tonight. Run codex features enable goals, get a second AI to draft three candidate missions against your repo, pick the one that maps to what you actually want, paste it after /goal, and walk away. When you check back, the diff will feel a little unreasonable. That's the point.
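The whole loop, sketched as a terminal session. The enable flag and version requirement come from above; the session output itself is omitted, and the prompt placeholder is whatever your second AI handed you:

```
$ codex features enable goals   # one-time, requires Codex CLI 0.128.0+
$ codex                         # restart so /goal appears in the slash list
> /goal <paste the meta-prompted mission here, then walk away>
```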
FREQUENTLY ASKED QUESTIONS
What is the /goal feature in Codex?
/goal is a slash command in the Codex CLI that turns Codex into a long-horizon autonomous agent. You give it a high-level mission and it keeps planning, executing, and self-correcting for hours until the mission is done. It shipped in Codex CLI 0.128.0 and is gated behind a feature flag.
How do I enable /goal in Codex?
Run `codex features enable goals` in your terminal, then restart Codex. The /goal command will show up in your slash command list. If you don't see it after that, update to Codex CLI 0.128.0 or later.
What is meta prompting?
Meta prompting is using one AI to write the prompt for another AI. For Codex /goal, you ask an AI with context on your project to research the /goal feature, look at the codebase, and produce a detailed /goal prompt you can paste in. The AI is better at structuring instructions for an agent than you are.
Why are hand-written /goal prompts not good enough?
Hand-written /goal prompts produce results that might as well have come from a normal prompt. Humans underestimate what an autonomous agent needs to know to stay on track without supervision, so the agent fills in the blanks itself and drifts. Meta-prompted /goal prompts are denser, more constrained, and produce dramatically better runs.
What kinds of tasks is Codex /goal good for?
/goal is best for tasks with a clear definition of done and a lot of mechanical work to get there: codebase migrations, end-to-end refactors, multi-file feature builds, and writing test suites for legacy code. It's not the right tool for ambiguous design work or architecture decisions you haven't made yet.
Does Claude Code have an equivalent of /goal?
Not currently. Claude Code is still better at fast conversational iteration, but it has no long-horizon mission mode equivalent to Codex /goal. Droid from Factory has Missions, which is the closest competitor for autonomous background work, but Codex /goal is more ambitious in what it'll attempt in a single run.


