* 🐛 fix: Enhance provider validation and error handling in getProviderConfig function
* WIP: edit text part
* refactor: Allow updating of both TEXT and THINK content types in message updates
* WIP: first pass, save & submit
* chore: remove legacy generation user message field
* feat: merge edited content
* fix: update placeholder and description for bedrock setting
* fix: remove unsupported warning message for AI resubmission
* 🔧 fix: enhance client options handling in AgentClient and set default recursion limit
- Updated the recursion limit to default to 25 if not specified in agentsEConfig.
- Enhanced client options in AgentClient to include model parameters such as apiKey and anthropicApiUrl from agentModelParams.
- Updated requestOptions in the anthropic endpoint to use reverseProxyUrl as anthropicApiUrl.
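A minimal sketch of the intent behind the item above (identifiers such as `agentsEConfig` and `agentModelParams` follow the commit text; the actual wiring inside `AgentClient` may differ):

```js
// Sketch only: default the recursion limit and forward model params as client options.
function buildClientOptions(agentsEConfig = {}, agentModelParams = {}) {
  return {
    recursionLimit: agentsEConfig.recursionLimit ?? 25,
    apiKey: agentModelParams.apiKey,
    anthropicApiUrl: agentModelParams.anthropicApiUrl,
  };
}
```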
* Enhance LLM configuration tests with edge case handling
* chore: add return type annotation for the getCustomEndpointConfig function
* fix: update modelOptions handling to use optional chaining and default to an empty object in multiple endpoint initializations
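In practice the pattern is roughly the following (a sketch; the real initializations spread these options into each client's config):

```js
// Guard against a missing endpointOption so initialization never throws.
function getModelOptions(endpointOption) {
  return endpointOption?.modelOptions ?? {};
}
```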
* chore: update @librechat/agents to version 2.4.42
* refactor: streamline agent endpoint configuration and enhance client options handling for title generation
- Introduced a new `getProviderConfig` function to centralize provider configuration logic.
- Updated `AgentClient` to utilize the new provider configuration, improving clarity and maintainability.
- Removed redundant code related to endpoint initialization and model parameter handling.
- Enhanced error logging for missing endpoint configurations.
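Roughly, the centralization looks like the following (an illustrative sketch; the real `getProviderConfig` signature and provider map live in the agents endpoint services and may differ):

```js
// Illustrative only: resolve a provider's configuration from a single map.
function getProviderConfig(provider, providerMap) {
  const config = providerMap[provider];
  if (!config) {
    throw new Error(`No provider configuration found for: ${provider}`);
  }
  return config;
}
```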
* fix: add abort handling for image generation and editing in OpenAIImageTools
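A hedged sketch of the abort handling (the real tool threads the request's abort signal into the OpenAI image calls; the exact call site and option passing may differ):

```js
// Sketch: bail out early and pass the signal so in-flight image requests can be cancelled.
async function generateImage(client, params, signal) {
  if (signal?.aborted) {
    throw new Error('Image generation aborted');
  }
  return client.images.generate(params, { signal });
}
```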
* ci: enhance getLLMConfig tests to verify fetchOptions and dispatcher properties
* fix: use optional chaining for endpointOption properties in getOptions
* fix: increase title generation timeout from 25s to 45s, pass `endpointOption` to `getOptions`
* fix: update file filtering logic in getToolFilesByIds to ensure the text field is properly checked
* fix: add error handling for empty OCR results in uploadMistralOCR and uploadAzureMistralOCR
* fix: enhance error handling in file upload to include 'No OCR result' message
* chore: update error messages in uploadMistralOCR and uploadAzureMistralOCR
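The guard amounts to something like the following (the OCR result shape is assumed for the sketch):

```js
// Throw early when the OCR provider returns nothing usable.
function assertOCRResult(result) {
  if (!result?.pages?.length) {
    throw new Error('No OCR result');
  }
  return result;
}
```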
* fix: enhance filtering logic in getToolFilesByIds with context checks for OCR resources, so that only files directly attached to the agent are included
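Illustrative shape of that filter (field names such as `metadata.context` and the attached-file set are assumptions for the sketch):

```js
// Keep OCR-context files only when they are directly attached to the agent.
// `attachedFileIds` is assumed to be a Set of the agent's own file ids.
function filterToolFiles(files, attachedFileIds) {
  return files.filter((file) => {
    if (file.metadata?.context === 'ocr') {
      return attachedFileIds.has(file.file_id);
    }
    return true;
  });
}
```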
---------
Co-authored-by: Matt Burnett <matt.burnett@shopify.com>
* chore: bump vite, vitejs/plugin-react, mark client package as esm, move react-query as a peer dep in data-provider
* chore: import changes due to new data-provider export strategy, also fix type imports where applicable
* chore: export react-query services separately to avoid pulling React dependencies into /api/
* chore: suppress sourcemap warnings and polyfill node:path, which is used by filenamify
TODO: replace filenamify with an alternative and REMOVE polyfill
* chore: /api/ changes to support `librechat-data-provider`
* refactor: rewrite Dockerfile.multi in light of /api/ changes to support `librechat-data-provider`
* chore: remove volume mapping to node_modules directories in default compose file
* chore: remove schemas from /api/ as they are no longer needed with the use of `librechat-data-provider`
* fix(ci): jest `librechat-data-provider/react-query` module resolution
* feat: update PaLM icons
* feat: add additional google models
* POC: formatting inputs for Vertex AI streaming
* refactor: move endpoints services outside of /routes dir to /services/Endpoints
* refactor: shorten schemas import
* refactor: rename PALM to GOOGLE
* feat: make Google editable endpoint
* feat: reusable Ask and Edit controllers based off Anthropic
* chore: organize imports/logic
* fix(parseConvo): include examples in googleSchema
* fix: google only allows an odd number of messages to be sent
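The constraint can be satisfied by trimming the oldest message whenever the count is even, roughly as follows (a sketch of the idea, not the exact implementation):

```js
// Google chat payloads must end on a user turn, which means an odd message count.
function ensureOddMessageCount(messages) {
  return messages.length % 2 === 0 ? messages.slice(1) : messages;
}
```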
* fix: pass proxy to AnthropicClient
* refactor: change `google` altName to `Google`
* refactor: update getModelMaxTokens and related functions to handle maxTokensMap with nested endpoint model key/values
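The lookup now goes endpoint-first, then model (a sketch; the values below are placeholders, not the real token limits):

```js
// Sketch of the nested map: endpoint -> model -> max tokens.
const maxTokensMap = {
  google: { 'chat-bison': 4096 },
  openAI: { 'gpt-4': 8191 },
};

function getModelMaxTokens(modelName, endpoint = 'openAI') {
  return maxTokensMap[endpoint]?.[modelName];
}
```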
* refactor: google Icon and response sender changes (Codey and Google logo instead of PaLM in all cases)
* feat: google support for maxTokensMap
* feat: google updated endpoints with Ask/Edit controllers, buildOptions, and initializeClient
* feat(GoogleClient): now builds prompt for text models and supports real streaming from Vertex AI through langchain
* chore(GoogleClient): remove comments previously left in for reference; they remain available in git history
* docs: update google instructions (WIP)
* docs(apis_and_tokens.md): add images to google instructions
* docs: remove typo in apis_and_tokens.md
* Update apis_and_tokens.md
* feat(Google): use default settings map, fully support context for both text and chat models, fully support examples for chat models
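For reference, chat-model examples are passed as input/output pairs along these lines (the content is illustrative and the exact mapping in GoogleClient may differ):

```js
// Illustrative example pair forwarded to Google chat models.
const examples = [
  {
    input: { content: 'What is the capital of France?' },
    output: { content: 'Paris.' },
  },
];
```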
* chore: update more PaLM references to Google
* chore: move playwright out of workflows to avoid failing tests