OpenAI's Codex CLI has evolved from the lightweight terminal agent launched in April 2025 into the central hub of an entire coding platform — spanning terminal, IDE extension, cloud environment, and even mobile. The model lineup has grown from the original codex-1 and o4-mini to a family that now includes GPT-5.3-Codex and the ultra-fast GPT-5.3-Codex-Spark on Cerebras hardware, and capabilities like MCP server integration, agent skills, mid-turn steering, web search, and code review have transformed what was once a clever terminal helper into a serious contender for enterprise development workflows.
This guide covers installing and configuring Codex CLI on Ubuntu Linux in its current state: the models available, the authentication paths, the security controls that matter for production environments, and the MCP and skills integrations that make Codex CLI worth deploying beyond basic code generation.
✓ Key Takeaways
- Installation remains npm-based — `npm i -g @openai/codex` or direct binary download from GitHub Releases. Homebrew is also supported via `brew install --cask codex`.
- GPT-5.3-Codex is the current flagship model — 25% faster than GPT-5.2-Codex with stronger reasoning and mid-turn steering, available through ChatGPT Plus, Pro, Business, Edu, and Enterprise subscriptions. GPT-5.3-Codex-Spark adds ultra-low-latency inference via Cerebras for Pro users.
- ChatGPT account authentication is the recommended path — API key auth remains available for CI/CD and headless environments, but subscription-based auth provides the simplest onboarding.
- MCP server integration connects Codex to external tools — from documentation servers (Context7) to browser automation (Playwright) to databases, configured through a shared `config.toml` that syncs between CLI and IDE extension.
- Linux sandbox (bubblewrap) now provides configurable isolation — with read-only access policies, shell environment controls, and approval modes that restrict what Codex can touch on your system.
What Codex CLI Does in 2026
Codex CLI is an open-source, Rust-based coding agent that runs locally in your terminal and connects to OpenAI's cloud models for AI inference. Your code stays on your machine and executes locally. The AI reasoning that interprets your prompts and generates solutions happens in OpenAI's cloud infrastructure. This hybrid architecture lets Codex leverage models far more powerful than could run on a development workstation while maintaining developer control over what actually runs in their environment.
The product has expanded well beyond the original terminal-only tool. Codex now spans four surfaces — all connected through your ChatGPT account so context moves seamlessly between them:
- Codex CLI: The open-source terminal agent. Reads, edits, and runs code in your local environment with configurable approval modes and sandbox isolation.
- Codex IDE Extension: A VS Code extension that brings the same agentic capabilities into your editor, sharing MCP and configuration with the CLI.
- Codex Cloud: An asynchronous web-based environment (chatgpt.com/codex) where tasks run independently in isolated containers with your codebase preloaded.
- Codex App: A dedicated desktop application (Windows alpha testing underway) with full model selection, skills, and MCP support.
Current capabilities include natural language code generation and editing across multiple files, codebase understanding and explanation, multimodal input (screenshots, wireframes, diagrams), Git-aware operations, MCP server integration for external tools, agent skills for repeatable workflows, mid-turn steering (redirect the agent while it's working), built-in web search (cached or live), automated code review, and thread resume for continuing sessions across restarts [OpenAI].
The Model Lineup
Understanding which models are available matters for both capability planning and cost management. Codex CLI supports multiple models that can be switched mid-session using the /model command:
| Model | Best For | Notes |
|---|---|---|
| GPT-5.3-Codex | Complex engineering, long-running multi-day tasks | Current flagship (Feb 2026). 25% faster than 5.2. State-of-the-art on SWE-Bench Pro and Terminal-Bench 2.0. First model treated as High security capability under Preparedness Framework. Supports mid-turn steering and real-time progress updates. |
| GPT-5.3-Codex-Spark | Real-time interactive coding, quick edits | Smaller 5.3 variant optimized for ultra-low latency on Cerebras hardware. 1,000+ tokens/sec. Research preview for Pro users. 128k context, text-only. Performance between 5.3-Codex and 5.1-Codex-Mini. |
| GPT-5.2-Codex | Long-running tasks, large repositories | Previous flagship. Strong cybersecurity capabilities. Native compaction for extended sessions. Currently the latest model available via API key. |
| GPT-5.1-Codex-Max | Project-scale, multi-context-window work | Frontier agentic model for long-running tasks. Compaction across multiple context windows. |
| GPT-5-Codex-Mini | Quick edits, code Q&A, cost-sensitive workflows | 4x more usage within subscription limits. Auto-offered at 90% usage cap. |
For most development work, GPT-5.3-Codex is the default and recommended model. Codex-Spark offers a near-instant experience for real-time interactive editing (currently a research preview for Pro users), while GPT-5-Codex-Mini provides a cost-effective option for simpler tasks or when approaching usage limits [OpenAI].
Prerequisites for Ubuntu Installation
- Ubuntu 20.04+ (22.04 LTS or 24.04 LTS recommended for enterprise deployments)
- Node.js 18+ for npm installation method (or download binary directly from GitHub Releases for zero-dependency install)
- Git 2.23+ (recommended for repository-aware operations)
- Stable internet connection to api.openai.com for all AI inference
- A ChatGPT subscription — Plus ($20/mo), Pro ($200/mo), Business, Edu, or Enterprise
- 2GB+ RAM minimum, 4GB recommended — Codex CLI itself is lightweight since inference runs in the cloud
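Before installing, you can sanity-check the Node.js requirement with a small helper. This is a generic version-string check written for this guide, not part of Codex itself:

```shell
# Returns success if a `node --version` string meets the minimum major version.
node_major_ok() {
  major="${1#v}"        # strip the leading "v"
  major="${major%%.*}"  # keep only the major component
  [ "$major" -ge "$2" ]
}

# Usage: check the locally installed Node.js against the Node 18+ requirement.
if node_major_ok "$(node --version 2>/dev/null || echo v0)" 18; then
  echo "Node.js is new enough for the npm install method"
else
  echo "Node.js missing or too old; use the NodeSource step or the binary install"
fi
```

If Node.js is absent entirely, the `|| echo v0` fallback makes the check fail cleanly instead of erroring out.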
Note: Network Requirements
Codex CLI requires outbound HTTPS access to api.openai.com for all AI operations. Organizations with restrictive firewall policies must whitelist these endpoints. If web search is enabled (default: cached mode), additional outbound access may be needed depending on configuration.
Step-by-Step Installation
Step 1: Update System Packages
```bash
sudo apt update && sudo apt upgrade -y
```

Step 2: Install Node.js via NodeSource
Ubuntu's default repositories often contain outdated Node.js versions. Install from NodeSource for a current release:
```bash
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# Verify
node --version   # Should show v20.x.x
npm --version    # Should show 10.x.x or higher
```

Step 3: Install Codex CLI
Install globally via npm:
```bash
npm i -g @openai/codex
```

Alternatively, download a pre-compiled binary directly from the GitHub Releases page. Each release includes platform-specific archives — for Ubuntu, download the `codex-x86_64-unknown-linux-musl` binary, rename it to `codex`, and place it in your PATH.
Homebrew is also supported on Linux:
```bash
brew install --cask codex
```

Important: Never Use sudo With npm Install
If you encounter EACCES permission errors, configure a user-level npm directory instead of using sudo: `mkdir -p ~/.npm-global && npm config set prefix ~/.npm-global`, then add `export PATH="$HOME/.npm-global/bin:$PATH"` to your `~/.bashrc` (use `$HOME` rather than `~` here, since the tilde is not expanded inside double quotes).
Step 4: Verify Installation
```bash
codex --version
```

Step 5: Authenticate
Launch Codex CLI and complete first-run authentication:
```bash
cd your-project-directory
codex
```

ChatGPT Account Authentication (Recommended): Select "Sign in with ChatGPT." Codex launches a temporary local OAuth server and opens your browser for authentication. After completing the flow, return to your terminal — authentication tokens are stored automatically. This method provides the simplest onboarding and access to all Codex models through your existing subscription.
API Key Authentication: For CI/CD pipelines, headless servers, or environments without browser access, configure an API key as an environment variable:
```bash
# Set for current session
export OPENAI_API_KEY="sk-your-api-key-here"

# Persist across sessions
echo 'export OPENAI_API_KEY="sk-your-api-key-here"' >> ~/.bashrc
source ~/.bashrc
```

Obtain API keys from platform.openai.com/api-keys. Note that GPT-5.3-Codex and GPT-5.3-Codex-Spark are not yet available via API — API-key workflows should continue using gpt-5.2-codex until API support rolls out. Never commit API keys to version control or store them in plain text within project repositories.
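One way to keep the key out of dotfiles that get synced or committed is a permission-restricted file. A sketch, where the path `~/.config/openai/api_key` is an illustrative choice rather than any Codex convention:

```shell
# Sketch: keep the key in a mode-600 file outside any repository.
# The path below is illustrative, not a Codex or OpenAI convention.
mkdir -p ~/.config/openai
install -m 600 /dev/null ~/.config/openai/api_key   # create empty file, owner-only
printf '%s\n' "sk-your-api-key-here" > ~/.config/openai/api_key
export OPENAI_API_KEY="$(cat ~/.config/openai/api_key)"
```

The `export` line can then live in `~/.bashrc` without the secret itself ever appearing there.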
Understanding Approval Modes and Sandbox
Codex CLI provides granular control over what the AI agent can do on your system. These aren't just preferences — for enterprise environments, they're security controls that determine the agent's blast radius.
Codex CLI Approval Modes
Read Only
Codex reads and analyzes code but cannot modify files or execute commands. Use for exploration and planning.
Auto (Default)
Full workspace access for reads, edits, and commands. Requires approval for operations outside the workspace or network access.
Full Access
Unrestricted autonomy including file system access anywhere, command execution, and network operations. Use only in isolated environments.
Switch modes dynamically with /approvals read-only, /approvals auto, or /approvals full-access. On Linux, Codex now uses bubblewrap for sandbox isolation — providing configurable read-only access policies, shell environment controls, and restrictions on what the agent can reach. Configure these in ~/.codex/config.toml:
```toml
# Web search configuration (a top-level key, so it must appear
# before any [table] header in the file)
web_search = "cached"  # "cached" (default), "live", or "disabled"

# Control which env vars Codex forwards to spawned commands
[shell_environment_policy]
include_only = ["PATH", "HOME"]
```

Essential Commands and Configuration
| Command | Purpose |
|---|---|
| codex | Start an interactive TUI session in the current directory |
| codex "prompt" | Start a session with an initial prompt |
| codex exec "task" | Non-interactive execution for automation and CI/CD |
| /model | Switch between GPT-5.3-Codex, Codex-Spark, GPT-5.2-Codex, Mini, etc. |
| /approvals | Switch approval mode (read-only, auto, full-access) |
| /mcp | View active MCP server status and tools |
| /skills | View and invoke agent skills |
| /statusline | Configure which metadata appears in the TUI footer |
| /debug-config | Debug configuration with source provenance for each value |
| /m_update, /m_drop | Manage Codex's memory system (update or drop memories) |
| codex mcp add | Add an MCP server (stdio or HTTP) |
| codex features list | View available feature flags (unified_exec, shell_snapshot, etc.) |
| npm update -g @openai/codex | Update to the latest version |
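The `codex exec` form in the table above is what makes CI integration possible. A hypothetical GitHub Actions job, where the job layout, secret name, and prompt are all illustrative assumptions rather than an official recipe:

```yaml
# Hypothetical CI job running Codex non-interactively via `codex exec`.
# The secret name and prompt text are illustrative assumptions.
jobs:
  codex-task:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm i -g @openai/codex
      - run: codex exec "Summarize risky changes in this diff"
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

Because GPT-5.3-Codex is not yet available via API key, a pipeline like this would currently run against gpt-5.2-codex, as noted in the authentication section.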
Codex also supports steer mode — press Enter while Codex is running to inject new instructions into the current turn, or press Tab to queue a follow-up prompt for the next turn. This enables real-time course correction without canceling and restarting. The @ key opens fuzzy file search in the composer, and prefixing a line with ! runs a local shell command directly.
Setting Up MCP Servers
The Model Context Protocol connects Codex to external tools, documentation, and services. MCP configuration lives in ~/.codex/config.toml and is shared between the CLI and IDE extension — configure once, use everywhere.
Adding Context7 for Real-Time Documentation
```bash
# Add Context7 for up-to-date library documentation
codex mcp add context7 -- npx -y @upstash/context7-mcp

# Add Playwright for browser automation
codex mcp add playwright -- npx @playwright/mcp@latest

# List all configured MCP servers
codex mcp list
```

Inside a Codex session, use /mcp to view active servers and their available tools. MCP servers can also be configured directly in config.toml for more control:
```toml
[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp"]

[mcp_servers.openaiDeveloperDocs]
type = "streamable_http"
url = "https://developers.openai.com/mcp"
```

Codex also supports Streamable HTTP servers (remote MCP servers accessed via URL) and OAuth authentication for MCP servers that require it. If your OAuth provider requires a static callback URI, set mcp_oauth_callback_port in config.toml.
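Putting the pieces from this guide together, a single `~/.codex/config.toml` can combine web search, environment policy, and MCP server settings. A sketch using only keys discussed in this guide (the port value is illustrative):

```toml
# Combined ~/.codex/config.toml sketch. Top-level keys must appear
# before the first [table] header in TOML.
web_search = "cached"
mcp_oauth_callback_port = 8123   # illustrative value for a static OAuth callback

[shell_environment_policy]
include_only = ["PATH", "HOME"]

[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp"]

[mcp_servers.openaiDeveloperDocs]
type = "streamable_http"
url = "https://developers.openai.com/mcp"
```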
For organizations deploying AI development tools across teams, MCP server configurations can be scoped to projects via .codex/config.toml in the project root (trusted projects only), ensuring all team members access the same integrations.
Agent Skills: Repeatable Workflows
Skills package instructions, resources, and optional scripts so Codex can follow complex workflows reliably. They use progressive disclosure — Codex starts with each skill's metadata and loads full instructions only when needed.
A skill is a directory with a SKILL.md file containing a name, description, and instructions. Skills can be stored at the repository level (.agents/skills/), user level, or system level. Codex detects skill changes automatically.
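As a concrete sketch, a minimal skill directory might contain a SKILL.md like the one below. The source confirms only that the file carries a name, description, and instructions; the frontmatter layout and all example values here are illustrative assumptions:

```markdown
---
name: fix-auth-flow
description: Steps for debugging and fixing our OAuth login flow
---

1. Reproduce the failure against the staging environment.
2. Check token expiry handling in src/auth/ before touching other modules.
3. Run the auth test suite and include results in the summary.
```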
```text
# View available skills
/skills

# Invoke a skill explicitly with $
> $my-skill Fix the authentication flow

# Skills can also be implicitly invoked when your prompt matches the skill description
```

Skills can declare MCP server dependencies, so enabling a skill automatically ensures the required tools are available. For teams, sharing skills through version control creates consistent, auditable workflows that don't depend on individual developer configurations.
Enterprise Security Considerations
Deploying an AI coding agent that sends code context to cloud services for inference requires security planning that goes beyond traditional development tools. Codex CLI's architecture — local execution with cloud-based reasoning — creates specific requirements for organizations managing sensitive code.
- Network controls: Whitelist api.openai.com in firewall configurations. If web search is enabled, additional outbound access may be needed. Monitor outbound API traffic for anomalous patterns.
- Data privacy: Review OpenAI's data usage policies for your subscription tier. Business and Enterprise plans provide enhanced data privacy commitments. Restrict Codex to non-sensitive codebases initially and implement code classification policies for AI tool usage.
- API key management: Implement rotation policies (90-day maximum). Use separate keys for different environments. Store keys in environment variables or secrets management services — never in version control.
- Sandbox configuration: Use the Linux bubblewrap sandbox and shell_environment_policy to restrict what Codex can access. Configure read-only access for exploratory work. Use auto mode for routine development and reserve full-access for isolated test environments only.
- Code review requirements: Require human review for all AI-generated commits. Codex includes a built-in code review feature that can catch issues before shipping, but it should supplement rather than replace human review processes.
- Compliance evaluation: Assess Codex usage against organizational requirements including HIPAA, CMMC, and SOC 2 frameworks. Document AI tool usage in security policies and maintain audit logs of Codex operations.
- Enterprise admin controls: Organizations can use `requirements.toml` to restrict web search modes, define network constraints, and enforce approval policies across the team.
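As an illustration of the admin-controls idea only, a `requirements.toml` might look like the following. The key names here are assumptions made for this sketch, not the documented schema; consult OpenAI's admin documentation for the actual format:

```toml
# Hypothetical requirements.toml sketch. All key names are illustrative
# assumptions, not the documented schema.
allowed_web_search = ["cached", "disabled"]
allowed_approval_modes = ["read-only", "auto"]

[network]
allowed_domains = ["api.openai.com"]
```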
Troubleshooting Common Issues
▶ "Command not found" after installation
Your shell hasn't picked up the npm global bin path. Fix with:
```bash
npm config get prefix
echo 'export PATH="$(npm config get prefix)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```

▶ EACCES permission errors during npm install
Configure npm to use a user-owned directory rather than using sudo:
```bash
mkdir -p ~/.npm-global
npm config set prefix '~/.npm-global'
echo 'export PATH="$HOME/.npm-global/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
npm i -g @openai/codex
```

▶ Authentication browser issues on headless servers
Copy the authentication URL displayed in the terminal and open it on a device with a browser. Alternatively, use API key authentication for headless environments where browser-based OAuth isn't feasible.
▶ MCP servers not connecting
Required MCP servers now fail fast during session start rather than continuing in a broken state. Check your configuration:
- Verify config with `/mcp` inside a Codex session
- Ensure required dependencies (Node.js for npx-based servers) are installed
- Check `~/.codex/config.toml` for syntax errors
- For OAuth-based MCP servers, run `codex mcp login server-name`
Sources
- OpenAI. "Codex CLI." Developer Documentation, 2026. developers.openai.com/codex/cli
- OpenAI. "Model Context Protocol." Codex Documentation, 2026. developers.openai.com/codex/mcp
- OpenAI. "Introducing GPT-5.2-Codex." OpenAI Blog, 2026. openai.com/index/introducing-gpt-5-2-codex
- OpenAI. "Introducing upgrades to Codex." OpenAI Blog, 2025. openai.com/index/introducing-upgrades-to-codex
- OpenAI. "Codex Changelog." Developer Documentation, 2026. developers.openai.com/codex/changelog
- OpenAI. "openai/codex." GitHub Repository, 2026. github.com/openai/codex
Related Resources
Strategic guidance for AI development tool evaluation, deployment planning, and governance frameworks.
Security framework development for organizations deploying AI coding agents with cloud-based inference.
Companion guide covering Anthropic's Claude Code CLI with MCP server configuration and enterprise deployment.
Model comparison covering the AI engines powering Codex CLI and Google Antigravity.
Ready to Deploy AI Coding Tools Across Your Development Team?
Codex CLI's combination of powerful cloud-based models, configurable sandbox isolation, and MCP server integration makes it a compelling option for organizations looking to accelerate development velocity. ITECS provides the cybersecurity expertise and AI consulting capabilities needed to deploy these tools with the governance controls that enterprise environments demand.
