Replies: 7 comments 3 replies
Your org inference endpoint only supports models that appear in the Models Catalog for your org. Check GET /catalog/models and use the exact model id listed there; if openai/gpt-5 isn't in the catalog yet, use a model that is (e.g. openai/gpt-4o) until the rollout reaches your org. Docs link (catalog endpoint):
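A minimal sketch of that check in Python, assuming the catalog response is JSON with a `models` array of objects carrying an `id` field (the field names and sample ids here are illustrative, not confirmed against the real API):

```python
# Sample payload shaped like a catalog response (illustrative only).
catalog = {
    "models": [
        {"id": "openai/gpt-4o"},
        {"id": "openai/gpt-4o-mini"},
    ]
}

def is_in_catalog(catalog: dict, model_id: str) -> bool:
    """Return True only if model_id exactly matches a listed id."""
    return any(m["id"] == model_id for m in catalog.get("models", []))

print(is_in_catalog(catalog, "openai/gpt-4o"))  # True
print(is_in_catalog(catalog, "openai/gpt-5"))   # False
```

The point is the exact-match comparison: a model that is "enabled" in the UI but absent from this list will fail with a 400 at the endpoint.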
This error happens because the inference endpoint you are calling does not support the gpt-5 model, even if it's enabled in your org. Model availability can differ per endpoint and per action version. Try a model that the endpoint does support (for example gpt-4o) and confirm which models your endpoint exposes. You can list models from the inference endpoint here: https://docs.github.com/en/rest/actions/ai-inference#list-models Also make sure your workflow uses the latest version of actions/ai-inference, which supports newer models.
How to fix or troubleshoot:
Step 1: List the available models from your endpoint.
Step 2: Use a supported model from the list (for example openai/gpt-4o).
Step 3: Update your GitHub Actions workflow accordingly.
Step 4: Check the GitHub documentation and changelog for when GPT-5 becomes fully available.
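Steps 1 and 2 boil down to a membership check with a fallback; a hedged sketch in Python (the `pick_model` helper and the sample model list are illustrative, not part of any GitHub API):

```python
def pick_model(available: list[str], preferred: str, fallback: str) -> str:
    """Use the preferred model if the endpoint lists it, else fall back."""
    return preferred if preferred in available else fallback

# Pretend output of the endpoint's list-models call:
available = ["openai/gpt-4o", "openai/gpt-4o-mini"]
print(pick_model(available, "openai/gpt-5", "openai/gpt-4o"))  # openai/gpt-4o
```

Once GPT-5 shows up in the list, the same call starts returning the preferred id with no workflow change beyond the list refresh.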
What's really happening: even though the model shows as enabled in your org, the endpoint you're calling is GitHub Models–scoped, not OpenAI-global. So the 400 "Unavailable model" error means that particular inference endpoint doesn't serve gpt-5, regardless of the org-level setting.
If you see the model enabled in your org settings but the action keeps spitting out a 400 error, it usually feels like you're being ghosted by the API. The main thing to keep in mind is that the model name in the GitHub UI doesn't always match the exact string the inference endpoint expects. Even if your org has access to a specific model, the endpoint might be looking for a very specific version name. Here is what is likely going on and how to poke it to see what's actually available:

First, try changing the model string. GitHub's inference service often prefers specific versions over the generic name. Instead of openai/gpt-5, try gpt-5 or whatever specific version string is listed in the GitHub Models marketplace. Since gpt-5 is still very new or in limited release depending on your tier, it might be mapped under a different identifier like gpt-5-preview or something similar.

Second, you can ask the endpoint what it has in stock. If you have a PAT (Personal Access Token) with the right permissions, you can run a quick curl command against the models list endpoint:

curl -H "Authorization: Bearer YOUR_TOKEN" https://models.github.ai/inference/models

This will return a JSON list of every model ID that your specific token and org can actually call. If gpt-5 isn't in that list, then the "enabled" status in the org settings hasn't synced to the inference runtime yet.

Third, double-check the permissions in the action. Ensure the token being used by the workflow has the permissions the action needs (for GitHub Models that typically means the models: read permission).

If gpt-4o works but gpt-5 doesn't with the exact same setup, it's almost certainly down to the specific model ID string or a rollout delay on GitHub's backend. Give that curl command a shot to see the "true" list of available IDs! Let me know what it returns and we can narrow it down from there.
This comment was marked as off-topic.
For model availability at the AI inference endpoint, the most reliable source is the GitHub Models API documentation and the Model Catalog. You can programmatically list the available models with a GET request to the catalog endpoint. (Note: you'll need a valid GitHub token with the appropriate permissions.)
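For illustration, here's how such a request could be assembled with the Python standard library, using the /catalog/models path and models.github.ai host mentioned elsewhere in this thread (the exact URL and headers are assumptions; this builds the request without sending it, and YOUR_TOKEN is a placeholder):

```python
import urllib.request

token = "YOUR_TOKEN"  # placeholder, not a real credential
req = urllib.request.Request(
    "https://models.github.ai/catalog/models",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    method="GET",
)
# Inspect what would be sent; call urllib.request.urlopen(req) to actually send it.
print(req.get_method(), req.full_url)
```

Parsing the JSON response then gives you the exact model ids to paste into the workflow.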
Select Topic Area: Question

Body
I am using this action:

```yaml
uses: actions/ai-inference@v2.0.5
with:
  model: openai/gpt-5
  endpoint: https://models.github.ai/orgs/${{ github.repository_owner }}/inference
```
I get an error saying "simpleInference: chatCompletion failed: Error: 400 Unavailable model: gpt-5". I see the model gpt-5 as enabled in my org, and gpt-4o is available at this endpoint. Does anyone have an idea why this is the case?