Description
deepseek-v4-flash (DeepSeek V4) supports a 1M-token context window, but fallback_context_limit_for_model() in
src/provider/models.rs has no hardcoded fallback for DeepSeek models. They currently fall through to
get_cached_context_limit() or DEFAULT_CONTEXT_LIMIT = 200_000, so users connecting directly to the DeepSeek API
are capped at 200k tokens.
Suggested Fix
In src/provider/models.rs, add before the final else:
if model.starts_with("deepseek") {
    return Some(1_000_000);
}
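For context, a minimal sketch of how the patched function might look. The function body and the other branches here are assumptions for illustration (the actual src/provider/models.rs may differ); only the deepseek branch is the proposed change:

```rust
// Hypothetical sketch of fallback_context_limit_for_model().
// The non-deepseek branches are placeholders, not the real source.
fn fallback_context_limit_for_model(model: &str) -> Option<u64> {
    if model.starts_with("claude") {
        // placeholder branch for illustration
        Some(200_000)
    } else if model.starts_with("deepseek") {
        // Proposed fix: DeepSeek V4 models support a 1M-token context window.
        Some(1_000_000)
    } else {
        // Fall through to get_cached_context_limit() / DEFAULT_CONTEXT_LIMIT
        // in the caller.
        None
    }
}

fn main() {
    assert_eq!(
        fallback_context_limit_for_model("deepseek-v4-flash"),
        Some(1_000_000)
    );
    assert_eq!(fallback_context_limit_for_model("unknown-model"), None);
    println!("ok");
}
```

With this branch in place, deepseek-v4-flash would resolve to 1,000,000 tokens instead of falling back to the 200k default.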
Environment
- jcode v0.11.1 (1f622e6b)
- Provider: deepseek (direct, not OpenRouter)
- Model: deepseek-v4-flash
- Actual model context: 1,000,000 tokens