XAI Router Codex Monthly Service: Codex / OpenClaw / OpenCode
Posted February 14, 2026 by XAI Technical Team · 3 min read

XAI Router now keeps the Codex story on one clean rule: Codex CLI / App stay on native Responses; OpenClaw / OpenCode / Chat / Claude and other clients use their matching compatibility layers.
What Is Supported
- Codex CLI / App: native Responses path through `https://api.xairouter.com`.
- OpenClaw: standard Responses-compatible path through a custom `openai-responses` provider.
- OpenCode: `gpt-5.4` keeps its Responses request shape whenever possible through a targeted compatibility path.
- Compatibility APIs: `/v1/chat/completions` and `/v1/messages` still work, but they are bridge layers rather than the native Codex entry path.
Short version: Codex CLI / App should use /v1/responses; other clients should use their recommended integration path.
Recommended Codex CLI / App Config
This is a recommended ~/.codex/config.toml example:
model_provider = "xai"
model = "gpt-5.4"
model_reasoning_effort = "xhigh"
plan_mode_reasoning_effort = "xhigh"
model_reasoning_summary = "none"
model_verbosity = "medium"
approval_policy = "never"
sandbox_mode = "danger-full-access"
[model_providers.xai]
name = "OpenAI"
base_url = "https://api.xairouter.com"
wire_api = "responses"
requires_openai_auth = false
env_key = "XAI_API_KEY"

Once this is in place, both Codex CLI and Codex App can point directly at https://api.xairouter.com. Here `env_key = "XAI_API_KEY"` only tells Codex which environment variable to read; on Linux use ~/.bashrc, on macOS prefer ~/.zshrc, and on Windows set a user environment variable before reopening the shell. On some older macOS setups, legacy terminals, or IDE sessions that still inherit a bash login environment, also mirror the variable into ~/.bash_profile, and into ~/.bashrc if needed.
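Since `env_key` only names the variable, the key itself still has to exist in the shell environment. A minimal sketch (the key value below is a placeholder, not a real key):

```shell
# Set the key for the current shell session (replace with your real key):
export XAI_API_KEY="sk-example-key"

# To persist it, append the export line to ~/.zshrc (macOS) or ~/.bashrc (Linux),
# then reopen the shell or run `source` on that file.

# Confirm Codex will be able to read it before launching:
[ -n "${XAI_API_KEY}" ] && echo "XAI_API_KEY is set"
```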
Minimal API Examples
Responses API (Recommended)
curl https://api.xairouter.com/v1/responses -H "Authorization: Bearer ${XAI_API_KEY}" -H "Content-Type: application/json" -d '{
"model":"gpt-5.4",
"stream": false,
"input":[
{"type":"message","role":"user","content":[{"type":"input_text","text":"Summarize today's task priority in one sentence."}]}
]
}'

Chat Completions (Compatibility Bridge)
curl https://api.xairouter.com/v1/chat/completions -H "Authorization: Bearer ${XAI_API_KEY}" -H "Content-Type: application/json" -d '{
"model":"gpt-5.4",
"stream": false,
"messages":[{"role":"user","content":"Give me one short weather suggestion for Shanghai today."}]
}'

Claude API (Compatibility Bridge)
curl https://api.xairouter.com/v1/messages -H "x-api-key: ${XAI_API_KEY}" -H "anthropic-version: 2023-06-01" -H "Content-Type: application/json" -d '{
"model":"gpt-5.4",
"max_tokens": 256,
"messages":[{"role":"user","content":"Summarize today's task focus in one sentence."}]
}'

Compatibility APIs are useful for existing integrations, but if you want the closest path to native Codex behavior, Responses remains the baseline.
Recommended OpenClaw Setup
If your goal is the simplest, most stable Responses-compatible experience while routing OpenClaw through XAI Router, use this config:
{
"agents": {
"defaults": {
"model": { "primary": "xairouter/gpt-5.4" },
"models": {
"xairouter/gpt-5.4": {
"alias": "Codex",
"params": { "transport": "sse" }
}
}
}
},
"models": {
"mode": "replace",
"providers": {
"xairouter": {
"baseUrl": "https://api.xairouter.com/v1",
"apiKey": "${XAI_API_KEY}",
"api": "openai-responses",
"models": [
{ "id": "gpt-5.4", "name": "GPT-5.4" }
]
}
}
}
}

Key points:
- `api: "openai-responses"` is the OpenClaw-documented custom-provider structure.
- `baseUrl` should be `https://api.xairouter.com/v1`, and you do not need to add `headers.originator`; the leaner provider config is the better default.
- `params.transport = "sse"` keeps the upstream on HTTP `/v1/responses`; switch it to `"auto"` if you want WebSocket-first behavior.
- To avoid agent-local `models.json` overrides, this guide uses `models.mode = "replace"` for a more deterministic result.
- OpenClaw's built-in `openai-codex` OAuth path goes directly to OpenAI and does not route through XAI Router.
If the config appears ignored, also check whether ~/.openclaw/agents/<agentId>/models.json is overriding it.
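One quick way to spot such overrides is to list any agent-local models.json files, assuming the default ~/.openclaw layout described above:

```shell
# List agent-local models.json files that could shadow the global config.
# Each match lives at ~/.openclaw/agents/<agentId>/models.json.
find ~/.openclaw/agents -maxdepth 2 -name models.json 2>/dev/null
```

Any path this prints is a candidate override to inspect or remove.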
Platform-specific guides: /en/blog/openclaw/, /en/blog/openclaw-macos/, /en/blog/openclaw-windows-xai-router/.
OpenCode
For OpenCode, see: /en/blog/opencode-xai-router/. The current gpt-5.4 setup already keeps OpenCode on its Responses request shape whenever possible.
Conclusion
The current line is straightforward:
- Codex CLI / App: prefer native Responses
- OpenClaw / OpenCode: prefer their recommended Responses integrations
- Chat Completions / Claude-compatible APIs: still available, but positioned as bridge layers
- Codex `config.toml` in docs: unified to the current real local `gpt-5.4` config