🧹 chore: Remove Deprecated Gemini 2.0 Models & Fix Mistral-Large-3 Context Window (#12453)

* chore: remove deprecated Gemini 2.0 models from default models list

Remove gemini-2.0-flash-001 and gemini-2.0-flash-lite from the Google
default models array, as they have been deprecated by Google.

Closes #12444

* fix: add mistral-large-3 max context tokens (256k)

Add mistral-large-3 with 255000 max context tokens to the mistralModels
map. Without this entry, the model falls back to the generic
mistral-large key (131k), causing context window errors when using
tools with Azure AI Foundry deployments.

Closes #12429

* test: add mistral-large-3 token resolution tests and fix key ordering

Add test coverage for mistral-large-3 context token resolution,
verifying exact match, suffixed variants, and longest-match precedence
over the generic mistral-large key. Reorder the mistral-large-3 entry
after mistral-large to follow the file's documented convention of
listing newer models last for reverse-scan performance.
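The test commit above describes longest-match precedence with a reverse scan over the token map. A minimal sketch of that resolution strategy, assuming a hypothetical `resolveMaxTokens` helper (the actual LibreChat function name and signature may differ):

```typescript
// Excerpt of the token map from the diff below; newer models are listed
// last so a reverse scan finds them first.
const mistralModels: Record<string, number> = {
  'mistral-large-2402': 127500,
  'mistral-large-2407': 127500,
  'mistral-large': 131000,
  'mistral-large-3': 255000,
};

// Hypothetical resolver: scan keys in reverse and prefer the longest key
// the model name starts with, so 'mistral-large-3-2512' matches
// 'mistral-large-3' (255k) rather than the generic 'mistral-large' (131k).
function resolveMaxTokens(modelName: string): number | undefined {
  const keys = Object.keys(mistralModels);
  let best: string | undefined;
  for (let i = keys.length - 1; i >= 0; i--) {
    const key = keys[i];
    if (modelName.startsWith(key) && (!best || key.length > best.length)) {
      best = key;
    }
  }
  return best !== undefined ? mistralModels[best] : undefined;
}
```

Without the `mistral-large-3` entry, both the exact name and suffixed variants would fall through to the generic `mistral-large` key, reproducing the context-window errors reported in #12429.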
This commit is contained in:
Danny Avila, 2026-03-28 23:44:58 -04:00 (committed by GitHub)
parent fda1bfc3cc
commit f82d4300a4
GPG key ID: B5690EEEBB952194 (no known key found for this signature in database)
3 changed files with 55 additions and 3 deletions


@@ -72,6 +72,7 @@ const mistralModels = {
   'mistral-large-2402': 127500,
   'mistral-large-2407': 127500,
   'mistral-large': 131000,
+  'mistral-large-3': 255000,
   'mistral-saba': 32000,
   'ministral-3b': 131000,
   'ministral-8b': 131000,


@@ -1238,9 +1238,6 @@ export const defaultModels = {
     'gemini-2.5-pro',
     'gemini-2.5-flash',
     'gemini-2.5-flash-lite',
-    // Gemini 2.0 Models
-    'gemini-2.0-flash-001',
-    'gemini-2.0-flash-lite',
   ],
   [EModelEndpoint.anthropic]: sharedAnthropicModels,
   [EModelEndpoint.openAI]: [