Convert ChatJimmy into an OpenAI-compatible API. Deploy to Vercel Edge in one click.
- Fork this repository
- Click the button below to deploy your fork to Vercel
- OpenAI-compatible API — `GET /v1/models` and `POST /v1/chat/completions`
- Streaming & non-streaming — full SSE streaming support
- Bearer token auth — optional API key protection via the `API_KEY` env var
- Vercel Edge — runs on the Vercel Edge Runtime with Hono
- Zero config — auto-detected by the Vercel CLI, ready to deploy
Set your deployed URL as the API base and use it like OpenAI:
```bash
curl https://your-deploy.vercel.app/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "llama3.1-8B",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": true
  }'
```

Works with any OpenAI-compatible client:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-deploy.vercel.app/v1",
    api_key="your-api-key",
)

response = client.chat.completions.create(
    model="llama3.1-8B",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

| Method | Path | Description |
|---|---|---|
| `GET` | `/v1/models` | List available models |
| `POST` | `/v1/chat/completions` | Chat completions (stream / non-stream) |
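When `stream` is enabled, the response arrives as Server-Sent Events: each `data:` line carries a chunk object and the stream ends with `data: [DONE]`. A minimal parsing sketch — the chunk shape here is assumed from the standard OpenAI chat-completions streaming format, not taken from this repo's code:

```python
import json


def iter_content_deltas(raw_body: str):
    """Yield content deltas from an OpenAI-style SSE stream body."""
    for line in raw_body.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]


# Example stream body with two chunks and the terminator:
raw = (
    'data: {"choices":[{"delta":{"content":"Hel"}}]}\n'
    '\n'
    'data: {"choices":[{"delta":{"content":"lo"}}]}\n'
    '\n'
    'data: [DONE]\n'
)
print("".join(iter_content_deltas(raw)))  # -> Hello
```

In practice the official client handles this for you: pass `stream=True` to `client.chat.completions.create(...)` and iterate the returned chunks.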
| Variable | Required | Description |
|---|---|---|
| `API_KEY` | No | Bearer token for authentication. If not set, no auth is required. |