Mirror of https://github.com/danny-avila/LibreChat.git (synced 2025-12-24 04:10:15 +01:00)
* chore: bump browserslist-db@latest
* refactor(EndpointService): simplify with `generateConfig`, utilize optional baseURL for OpenAI-based endpoints, use the `isUserProvided` helper fn wherever needed
* refactor(custom/initializeClient): use standardized naming for common variables
* feat: user-provided baseURL for OpenAI-based endpoints
* refactor(custom/initializeClient): re-order operations
* fix: knownendpoints enum definition and add `FetchTokenConfig`, bump data-provider
* refactor(custom): use a tokenKey dependent on userProvided conditions for caching and fetching the endpointTokenConfig; anticipate token rates from the custom config
* refactor(custom): ensure endpointTokenConfig is only accessed from cache if it qualifies for fetching
* fix(ci): update tests for initializeClient based on userProvideURL changes
* fix(EndpointService): correct baseURL env var for assistants: `ASSISTANTS_BASE_URL`
* fix: unnecessary run cancellation on res.close() when response.run is completed
* feat(assistants): user-provided URL option
* ci: update tests and add a test for the `assistants` endpoint
* chore: leaner condition for request closing
* chore: more descriptive error message prompting users to provide keys again
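The `generateConfig` / `isUserProvided` items above revolve around one pattern: a config value of `user_provided` is a placeholder the user must fill in at request time, while any other value is used directly, and `baseURL` is optional for OpenAI-compatible endpoints. Below is a minimal sketch of that pattern; the function shapes, flag names, and example values are assumptions for illustration, not LibreChat's actual implementation.

```js
// Minimal sketch (illustrative only, not the project's real code).
// A value of 'user_provided' means the user supplies it from the UI;
// any other truthy value is taken as the configured credential/URL.
const isUserProvided = (value) => value === 'user_provided';

// Hypothetical generateConfig: builds an endpoint config from env-style values.
// `apiKey` enables the endpoint; `baseURL` is optional.
function generateConfig(apiKey, baseURL) {
  if (!apiKey) {
    return null; // endpoint disabled when no key is configured
  }

  const config = {
    userProvide: isUserProvided(apiKey),
  };

  if (baseURL) {
    config.userProvideURL = isUserProvided(baseURL);
    // Only expose a concrete baseURL when it is not user-provided.
    config.baseURL = config.userProvideURL ? null : baseURL;
  }

  return config;
}

// Example usage with env-style inputs (values are illustrative):
console.log(generateConfig('user_provided', 'user_provided'));
// -> { userProvide: true, userProvideURL: true, baseURL: null }
console.log(generateConfig('sk-123', 'https://api.example.com/v1'));
// -> { userProvide: false, userProvideURL: false, baseURL: 'https://api.example.com/v1' }
```

In this shape, a `user_provided` key or URL yields `userProvide`/`userProvideURL` flags that the client can use to prompt the user, which also explains the tokenKey-dependent caching note above: a cache key for the endpointTokenConfig would need to vary with whether credentials come from the server config or from the user.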
| Name |
|---|
| EndpointService.js |
| getCustomConfig.js |
| handleRateLimits.js |
| index.js |
| loadAsyncEndpoints.js |
| loadConfigEndpoints.js |
| loadConfigModels.js |
| loadCustomConfig.js |
| loadDefaultEConfig.js |
| loadDefaultModels.js |
| loadOverrideConfig.js |