Using LiteLLM with OpenAI Codex
https://docs.litellm.ai/docs/tutorials/openai_codex
#Codex_CLI
#LiteLLM
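1. Install OpenAI Codex
Assuming the standard global npm install of the Codex CLI:
npm install -g @openai/codex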
2. Start LiteLLM Proxy
Run the proxy either with Docker or with the litellm CLI; a sketch of both options follows.
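A rough sketch of both options, assuming a config.yaml in the current directory and port 4000 (matching the OPENAI_BASE_URL below):
# Option A: run the proxy via the litellm CLI
pip install 'litellm[proxy]'
litellm --config config.yaml --port 4000
# Option B: run the proxy via Docker
docker run -p 4000:4000 \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -e GEMINI_API_KEY=$GEMINI_API_KEY \
  ghcr.io/berriai/litellm:main-latest --config /app/config.yaml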
3. Configure LiteLLM for Model Routing
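A minimal routing sketch, assuming LiteLLM's model_list config format and a Gemini key supplied via the GEMINI_API_KEY environment variable:
export GEMINI_API_KEY="your-gemini-api-key"   # assumption: key provided as an env var
cat > config.yaml <<'EOF'
model_list:
  - model_name: gemini-2.0-flash          # name Codex will request
    litellm_params:
      model: gemini/gemini-2.0-flash      # route to Gemini via Google AI Studio
      api_key: os.environ/GEMINI_API_KEY  # read the key from the environment
EOF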
4. Point Codex to LiteLLM
export OPENAI_BASE_URL=http://0.0.0.0:4000
This points the Codex CLI at the LiteLLM proxy instead of the OpenAI API.
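Codex still expects OPENAI_API_KEY to be set; the assumption here is that you pass the LiteLLM proxy key, or any placeholder if the proxy runs without auth:
export OPENAI_API_KEY="sk-1234"   # LiteLLM proxy/master key, or a placeholder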
5. Run Codex with Gemini
codex --model gemini-2.0-flash --full-auto
Supported Endpoints: /responses (the Responses API endpoint Codex calls through the proxy)