Enable GPT-5.5 in Codex: Temporary Switching and Default Configuration via XAI Router
Posted April 24, 2026 by XAI Tech Team · 4 min read

Codex CLI makes model selection flexible: you can choose a model for one session, or write your preferred model into ~/.codex/config.toml as the default.
If you want to use gpt-5.5 through XAI Router, the recommended setup is straightforward: route Codex requests to https://api.xairouter.com and set the default model to gpt-5.5. For Codex versions that do not yet include built-in gpt-5.5 metadata, also add a local model-catalog.json so Codex treats it as a known model instead of falling back to conservative defaults.
When to Use This
This guide is for you if:
- You already have Codex CLI installed.
- You already have a valid XAI_API_KEY.
- You want to test or default to gpt-5.5 in Codex.
- You want Codex to recognize gpt-5.5 locally instead of using unknown-model fallback metadata.
First, set your API key:
export XAI_API_KEY="your XAI API Key"

To make it persistent, add it to ~/.bashrc, ~/.zshrc, or your preferred environment variable manager.
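Before launching Codex, you can sanity-check that the variable is actually visible to child processes. A minimal Python sketch (the helper name check_api_key is illustrative, not part of Codex or XAI Router):

```python
import os

def check_api_key(env=os.environ):
    """Return True when XAI_API_KEY is present and non-empty."""
    return bool(env.get("XAI_API_KEY", "").strip())

if __name__ == "__main__":
    # Codex reads the key from the environment at startup.
    print("XAI_API_KEY is set" if check_api_key() else "XAI_API_KEY is missing")
```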
Option 1: Use GPT-5.5 for One Session
If you only want to test gpt-5.5 temporarily, you do not need to edit the config file. Start Codex with:
codex -m gpt-5.5

You can also verify it in non-interactive mode:
codex exec -m gpt-5.5 "Describe the active model configuration in one sentence"

This is the safest way to try it first. It does not modify ~/.codex/config.toml, so the next launch will still use your previous default model.
Option 2: Set GPT-5.5 as the Default Model
Edit ~/.codex/config.toml:
model_provider = "xai"
model = "gpt-5.5"
model_reasoning_effort = "xhigh"
plan_mode_reasoning_effort = "xhigh"
model_reasoning_summary = "none"
model_context_window = 1050000
model_auto_compact_token_limit = 945000
tool_output_token_limit = 6000
approval_policy = "never"
sandbox_mode = "danger-full-access"
suppress_unstable_features_warning = true
model_catalog_json = "model-catalog.json"
[model_providers.xai]
name = "OpenAI"
base_url = "https://api.xairouter.com"
wire_api = "responses"
requires_openai_auth = false
env_key = "XAI_API_KEY"
[features]
multi_agent = true
remote_connections = true
[agents]
max_threads = 4
max_depth = 1
job_max_runtime_seconds = 1800

This configuration does three things:
- model_provider = "xai" tells Codex to use the XAI Router provider defined below.
- model = "gpt-5.5" makes gpt-5.5 the default model.
- base_url = "https://api.xairouter.com" sends OpenAI Responses API traffic to XAI Router.
model_catalog_json = "model-catalog.json" is optional but recommended. It adds local model metadata so Codex versions that do not yet bundle gpt-5.5 can still treat it as a known model.
Make Codex Recognize GPT-5.5 Locally
Some Codex CLI versions may not include gpt-5.5 in the bundled model catalog yet. In that case, even if Codex sends requests with model = "gpt-5.5", the local client may use fallback metadata, which can show a smaller effective context window.
The recommended path is to download the prepared model catalog:
mkdir -p ~/.codex
curl -L "https://filelist.cn/disk/0/model-catalog.json" -o ~/.codex/model-catalog.json

If you do not have curl, open the URL below in your browser and save the file as ~/.codex/model-catalog.json:
https://filelist.cn/disk/0/model-catalog.json

If you prefer to generate the catalog yourself, you can use this script. It reads Codex's bundled model catalog, copies the gpt-5.4 capability metadata, and adds a gpt-5.5 entry.
mkdir -p ~/.codex
python3 - <<'PY'
import json
import subprocess
from pathlib import Path
codex_home = Path.home() / ".codex"
catalog_path = codex_home / "model-catalog.json"
# Read the model catalog bundled with the installed Codex version.
raw = subprocess.check_output(["codex", "debug", "models", "--bundled"], text=True)
catalog = json.loads(raw)
# Drop any existing gpt-5.5 entry so the script is safe to re-run.
models = [model for model in catalog["models"] if model.get("slug") != "gpt-5.5"]
# Clone the gpt-5.4 capability metadata as the starting point for gpt-5.5.
base = next(model for model in models if model.get("slug") == "gpt-5.4")
gpt55 = dict(base)
gpt55["slug"] = "gpt-5.5"
gpt55["display_name"] = "gpt-5.5"
gpt55["description"] = "Custom local metadata for gpt-5.5 via XAI Router."
gpt55["context_window"] = 1050000
gpt55["max_context_window"] = 1050000
catalog["models"] = [gpt55] + models
catalog_path.write_text(json.dumps(catalog, indent=2, ensure_ascii=False) + "\n")
print(catalog_path)
PY

Then make sure ~/.codex/config.toml contains:
model_catalog_json = "model-catalog.json"

If your Codex version requires an absolute path, use:
model_catalog_json = "/home/your-user/.codex/model-catalog.json"

Restart Codex after changing this file. model_catalog_json is loaded at startup, so existing sessions will not update automatically.
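After downloading or generating the catalog, a quick check confirms the gpt-5.5 entry is present with the expected context window. This sketch assumes the catalog layout used by the generation script above (a top-level "models" list with "slug" keys):

```python
import json
from pathlib import Path

def find_model(catalog: dict, slug: str):
    """Return the catalog entry whose slug matches, or None."""
    return next((m for m in catalog.get("models", []) if m.get("slug") == slug), None)

if __name__ == "__main__":
    catalog = json.loads((Path.home() / ".codex" / "model-catalog.json").read_text())
    entry = find_model(catalog, "gpt-5.5")
    if entry is None:
        print("gpt-5.5 not found in model-catalog.json")
    else:
        print(f"gpt-5.5 found, context_window={entry.get('context_window')}")
```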
Update the Model Mapping in XAI
After configuring Codex locally, also confirm that your XAI account's model mapping supports gpt-5.5. If your account still uses an older mapping, a request for gpt-5.5 may still be routed back to gpt-5.4 by the wildcard rule.
An older default mapping usually looks like this:
*=gpt-5.4,
gpt-5.4-nano=gpt-5.4-mini,
gpt-5.3-codex-spark*=gpt-5.3-codex-spark,
gpt-5.3-codex*=gpt-5.3-codex,
gpt-*-mini*=gpt-5.4-mini,
claude-haiku*=gpt-5.4-mini

The newer mapping that supports gpt-5.5 adds an explicit rule:
*=gpt-5.4,
gpt-5.5*=gpt-5.5,
gpt-5.4-nano=gpt-5.4-mini,
gpt-5.3-codex-spark*=gpt-5.3-codex-spark,
gpt-5.3-codex*=gpt-5.3-codex,
gpt-*-mini*=gpt-5.4-mini,
claude-haiku*=gpt-5.4-mini

If you want traffic to actually route to gpt-5.5, renew or update your plan in the XAI system so your account receives the latest mapping that includes gpt-5.5*=gpt-5.5.
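To see why the explicit rule matters, consider how wildcard mappings of this form might resolve. XAI Router's exact precedence rules are not documented here; the sketch below assumes the most specific matching pattern (the one with the most literal, non-wildcard characters) wins, which is consistent with the examples above:

```python
from fnmatch import fnmatchcase

def resolve_model(requested: str, mapping: str):
    """Resolve a requested model against comma-separated pattern=target rules.

    Assumption: the most specific match (most literal characters) takes
    precedence, so the catch-all '*' only applies when nothing else matches.
    """
    rules = []
    for part in mapping.split(","):
        pattern, _, target = part.strip().partition("=")
        if pattern and target:
            rules.append((pattern, target))
    matches = [(p, t) for p, t in rules if fnmatchcase(requested, p)]
    if not matches:
        return None
    # Prefer the rule whose pattern has the most non-wildcard characters.
    _, target = max(matches, key=lambda rule: len(rule[0].replace("*", "")))
    return target
```

With the older mapping, a gpt-5.5 request only matches the catch-all and is served by gpt-5.4; with the newer mapping, the explicit gpt-5.5*=gpt-5.5 rule wins. That is why the plan update matters.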
We do not automatically rewrite mappings for existing users. Model mapping is an account-level routing policy, and changing it without user action could affect production behavior, model cost, or compatibility with existing workloads.
Verify That It Works
The simplest check is to start a new Codex session:
codex -m gpt-5.5

Or run a quick non-interactive check:
codex exec -m gpt-5.5 "Print exactly one line: GPT-5.5 OK"

If you also configured model-catalog.json, the new session should no longer treat gpt-5.5 as an unknown model fallback. With the sample context window settings above, Codex's local effective context window will be close to:
1050000 * 95% = 997500

This number reflects Codex's local model metadata and truncation strategy. It is not a guarantee that the backend will accept that much context in every request. The actual usable limit still depends on the backend model and routing policy.
Recommended Setup
For daily use, we recommend this combination:
- Use XAI_API_KEY for credentials.
- Use model_provider = "xai" to route Codex through XAI Router.
- Use model = "gpt-5.5" if you want GPT-5.5 as the default.
- Use model_catalog_json to add local model metadata.
- Test first with codex -m gpt-5.5, then make it the default once it is stable for your account.
With this setup, Codex sends OpenAI Responses API requests through XAI Router and runs locally with custom gpt-5.5 model metadata. If you switch models often, you can keep your default unchanged and use -m gpt-5.5 only for selected tasks.