📎 feat: Direct Provider Attachment Support for Multimodal Content (#9994)

* 📎 feat: Direct Provider Attachment Support for Multimodal Content

* 📑 feat: Anthropic Direct Provider Upload (#9072)

* feat: implement Anthropic native PDF support with document preservation

- Add comprehensive debug logging throughout PDF processing pipeline
- Refactor attachment processing to separate image and document handling
- Create distinct addImageURLs(), addDocuments(), and processAttachments() methods
- Fix critical bugs in stream handling and parameter passing
- Add streamToBuffer utility for proper stream-to-buffer conversion (see the sketch after this list)
- Remove api/agents submodule from repository
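
A minimal sketch of the streamToBuffer utility mentioned above (name and placement assumed; the actual implementation may differ):

```ts
import type { Readable } from 'stream';

/** Collects a readable stream into a single Buffer (e.g., a PDF read from local storage). */
export async function streamToBuffer(stream: Readable): Promise<Buffer> {
  const chunks: Buffer[] = [];
  for await (const chunk of stream) {
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}
```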

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: remove out of scope formatting changes

* fix: stop duplication of file in chat on end of response stream

* chore: bring back file search and ocr options

* chore: localize upload to provider string in file menu

* refactor: change createMenuItems args to fit new pattern introduced by anthropic-native-pdf-support

* feat: add cache point for PDFs processed by the Anthropic endpoint, since they are unlikely to change and should benefit from caching

* feat: combine Upload Image into Upload to Provider since they both perform direct upload and change provider upload icon to reflect multimodal upload

* feat: add citations support according to docs
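
A hedged sketch of the Anthropic document content block the cache-point and citations changes above target, following the field names in Anthropic's public PDF and citations docs; the helper function itself is illustrative:

```ts
/** Illustrative shape of an Anthropic PDF document part with prompt caching and citations enabled. */
function formatAnthropicDocument(base64Pdf: string) {
  return {
    type: 'document',
    source: {
      type: 'base64',
      media_type: 'application/pdf',
      data: base64Pdf,
    },
    // PDFs rarely change between turns, so mark them as a cache point.
    cache_control: { type: 'ephemeral' },
    // Lets the model return source-grounded citations for this document.
    citations: { enabled: true },
  };
}
```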

* refactor: remove redundant 'document' check since documents are handled properly by formatMessage in the agents repo now

* refactor: change upload logic so the anthropic endpoint isn't exempted from the normal Agents upload path, for consistency with the rest of the upload logic

* fix: include width and height in return from uploadLocalFile so images are correctly identified when going through an AgentUpload in addImageURLs

* chore: remove client specific handling since the direct provider stuff is handled by the agent client

* feat: handle documents in AgentClient so no need for change to agents repo

* chore: removed unused changes

* chore: remove auto generated comments from OG commit

* feat: add logic for agents to use direct to provider uploads if supported (currently just anthropic)

* fix: reintroduce role check to fix render error caused by an undefined Content Part value

* fix: actually fix render bug by using proper isCreatedByUser check and making sure our mutation of formattedMessage.content is consistent

---------

Co-authored-by: Andres Restrepo <andres@thelinuxkid.com>
Co-authored-by: Claude <noreply@anthropic.com>

📁 feat: Send Attachments Directly to Provider (OpenAI) (#9098)

* refactor: change references from direct upload to direct attach to better reflect functionality

Since we now use the base64 encoding strategy, rather than the Files/File API, to send attachments directly to the provider, the upload nomenclature no longer makes sense. direct_attach better describes the different methods of sending attachments to providers, even if we later introduce direct upload support.
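
A rough sketch of what the base64 direct-attach strategy boils down to (the helper name is an assumption):

```ts
import { promises as fs } from 'fs';

/** Reads a locally stored attachment and encodes it for direct inclusion in the request payload. */
async function encodeForDirectAttach(filepath: string) {
  const buffer = await fs.readFile(filepath);
  return {
    data: buffer.toString('base64'),
    // Rough size check before sending the payload to the provider.
    sizeMB: buffer.byteLength / (1024 * 1024),
  };
}
```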

* feat: add upload to provider option for openai (and agent) ui

* chore: move anthropic pdf validator over to packages/api

* feat: simple pdf validation according to openai docs

* feat: add provider agnostic validatePdf logic to start handling multiple endpoints
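
A sketch of what a provider-agnostic validatePdf might look like; the size limits shown are illustrative placeholders, not the exact values each provider enforces:

```ts
/** Illustrative per-provider limits; real values should come from each provider's documentation. */
const PDF_LIMITS: Record<string, { maxSizeMB: number }> = {
  anthropic: { maxSizeMB: 32 },
  openAI: { maxSizeMB: 32 },
  google: { maxSizeMB: 20 },
};

function validatePdf(provider: string, buffer: Buffer): { valid: boolean; reason?: string } {
  // All PDFs start with the '%PDF-' magic bytes.
  if (!buffer.subarray(0, 5).equals(Buffer.from('%PDF-'))) {
    return { valid: false, reason: 'Not a valid PDF file' };
  }
  const limit = PDF_LIMITS[provider];
  if (limit && buffer.byteLength > limit.maxSizeMB * 1024 * 1024) {
    return { valid: false, reason: `PDF exceeds ${limit.maxSizeMB}MB limit for ${provider}` };
  }
  return { valid: true };
}
```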

* feat: add handling for openai specific documentPart formatting
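
A hedged sketch of the OpenAI-specific documentPart shape, assuming the Chat Completions `file` content part documented for PDF input (the wrapper function is illustrative):

```ts
/** Illustrative OpenAI Chat Completions content part for a base64-encoded PDF. */
function formatOpenAIDocument(filename: string, base64Pdf: string) {
  return {
    type: 'file',
    file: {
      filename,
      // OpenAI accepts PDFs inline as a base64 data URL in file_data.
      file_data: `data:application/pdf;base64,${base64Pdf}`,
    },
  };
}
```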

* refactor: move require statement to proper place at top of file

* chore: add in openAI endpoint for the rest of the document handling logic

* feat: add direct attach support for azureOpenAI endpoint and agents

* feat: add pdf validation for azureOpenAI endpoint

* refactor: unify all the endpoint checks with isDocumentSupportedEndpoint

* refactor: consolidate Upload to Provider vs Upload image logic for clarity

* refactor: remove anthropic from anthropic_multimodal fileType since we support multiple providers now

🗂️ feat: Send Attachments Directly to Provider (Google) (#9100)

* feat: add validation for google PDFs and add google endpoint as a document supporting endpoint

* feat: add proper pdf formatting for google endpoints (requires PR #14 in agents)
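
A sketch of the corresponding Google formatting, assuming the Gemini `inlineData` part shape used for inline file content:

```ts
/** Illustrative Gemini content part for a base64-encoded PDF sent inline. */
function formatGoogleDocument(base64Pdf: string) {
  return {
    inlineData: {
      mimeType: 'application/pdf',
      data: base64Pdf,
    },
  };
}
```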

* feat: add multimodal support for google endpoint attachments

* feat: add audio file svg

* fix: refactor attachments logic so multi-attachment messages work properly

* feat: add video file svg

* fix: allow follow-up questions about uploaded multimodal attachments

* fix: remove incorrect final message filtering that was breaking Attachment component rendering

fix: manually rename 'documents' to 'Documents' in git since the change wasn't picked up due to case insensitivity in the directory name

fix: add logic so the file picker for a Google agent has proper file-type filtering

🛫 refactor: Move Encoding Logic to packages/api (#9182)

* refactor: move audio encode over to TS

* refactor: audio encoding now functional in LC again

* refactor: move video encode over to TS

* refactor: move document encode over to TS

* refactor: video encoding now functional in LC again

* refactor: document encoding now functional in LC again

* fix: extend file type options in AttachFileMenu to include 'google_multimodal' and update dependency array to include agent?.provider

* feat: only accept pdfs if responses api is enabled for openai convos

chore: address ESLint comments

chore: add missing audio mimetype

* fix: type safety for message content parts and improve null handling

* chore: reorder AttachFileMenuProps for consistency and clarity

* chore: import order in AttachFileMenu

* fix: improve null handling for text parts in parseTextParts function

* fix: remove no longer used unsupported capability error message for file uploads

* fix: OpenAI Direct File Attachment Format

* fix: update encodeAndFormatDocuments to support OpenAI Responses API and enhance document result types
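
For the Responses API path, a hedged sketch of the document result shape, assuming the `input_file` content part from OpenAI's Responses API; the wrapper function is illustrative:

```ts
/** Illustrative content part for the OpenAI Responses API when it is enabled for the conversation. */
function formatResponsesApiDocument(filename: string, base64Pdf: string) {
  return {
    type: 'input_file',
    filename,
    file_data: `data:application/pdf;base64,${base64Pdf}`,
  };
}
```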

* refactor: broaden providers supported for documents

* feat: enhance DragDrop context and modal to support document uploads based on provider capabilities
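
A minimal sketch of how the drag-and-drop path can gate document uploads on provider capabilities, using the isDocumentSupportedProvider helper added in this change (the import path and surrounding helper name are assumptions):

```ts
import { isDocumentSupportedProvider } from 'librechat-data-provider';

/** Illustrative helper: decide which dropped files to accept for the current provider. */
function filterDroppedFiles(files: File[], provider?: string | null): File[] {
  const documentsAllowed = isDocumentSupportedProvider(provider);
  // Keep PDFs only when the provider supports direct document attachments.
  return files.filter((file) => documentsAllowed || file.type !== 'application/pdf');
}
```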

* fix: reorder import statements for consistency in video encoding module

---------

Co-authored-by: Dustin Healy <54083382+dustinhealy@users.noreply.github.com>
Danny Avila 2025-10-06 17:30:16 -04:00 committed by GitHub
parent 9c77f53454
commit bcd97aad2f
33 changed files with 1040 additions and 74 deletions

View file

@@ -57,6 +57,27 @@ export const fullMimeTypesList = [
'application/zip',
'image/svg',
'image/svg+xml',
// Video formats
'video/mp4',
'video/avi',
'video/mov',
'video/wmv',
'video/flv',
'video/webm',
'video/mkv',
'video/m4v',
'video/3gp',
'video/ogv',
// Audio formats
'audio/mp3',
'audio/wav',
'audio/ogg',
'audio/m4a',
'audio/aac',
'audio/flac',
'audio/wma',
'audio/opus',
'audio/mpeg',
...excelFileTypes,
];
@@ -123,7 +144,9 @@ export const applicationMimeTypes =
export const imageMimeTypes = /^image\/(jpeg|gif|png|webp|heic|heif)$/;
export const audioMimeTypes =
/^audio\/(mp3|mpeg|mpeg3|wav|wave|x-wav|ogg|vorbis|mp4|x-m4a|flac|x-flac|webm)$/;
/^audio\/(mp3|mpeg|mpeg3|wav|wave|x-wav|ogg|vorbis|mp4|m4a|x-m4a|flac|x-flac|webm|aac|wma|opus)$/;
export const videoMimeTypes = /^video\/(mp4|avi|mov|wmv|flv|webm|mkv|m4v|3gp|ogv)$/;
export const defaultOCRMimeTypes = [
imageMimeTypes,
@@ -142,8 +165,9 @@ export const supportedMimeTypes = [
excelMimeTypes,
applicationMimeTypes,
imageMimeTypes,
videoMimeTypes,
audioMimeTypes,
/** Supported by LC Code Interpreter PAI */
/** Supported by LC Code Interpreter API */
/^image\/(svg|svg\+xml)$/,
];
@@ -199,6 +223,13 @@ export const fileConfig = {
[EModelEndpoint.assistants]: assistantsFileConfig,
[EModelEndpoint.azureAssistants]: assistantsFileConfig,
[EModelEndpoint.agents]: assistantsFileConfig,
[EModelEndpoint.anthropic]: {
fileLimit: 10,
fileSizeLimit: defaultSizeLimit,
totalSizeLimit: defaultSizeLimit,
supportedMimeTypes,
disabled: false,
},
default: {
fileLimit: 10,
fileSizeLimit: defaultSizeLimit,

View file

@@ -369,7 +369,7 @@ export function parseTextParts(
continue;
}
if (part.type === ContentTypes.TEXT) {
const textValue = typeof part.text === 'string' ? part.text : part.text.value;
const textValue = (typeof part.text === 'string' ? part.text : part.text?.value) || '';
if (
result.length > 0 &&

View file

@@ -31,6 +31,61 @@ export enum EModelEndpoint {
gptPlugins = 'gptPlugins',
}
/** Mirrors `@librechat/agents` providers */
export enum Providers {
OPENAI = 'openAI',
ANTHROPIC = 'anthropic',
AZURE = 'azureOpenAI',
GOOGLE = 'google',
VERTEXAI = 'vertexai',
BEDROCK = 'bedrock',
BEDROCK_LEGACY = 'bedrock_legacy',
MISTRALAI = 'mistralai',
MISTRAL = 'mistral',
OLLAMA = 'ollama',
DEEPSEEK = 'deepseek',
OPENROUTER = 'openrouter',
XAI = 'xai',
}
/**
* Endpoints that support direct PDF processing in the agent system
*/
export const documentSupportedProviders = new Set<string>([
EModelEndpoint.anthropic,
EModelEndpoint.openAI,
EModelEndpoint.custom,
EModelEndpoint.azureOpenAI,
EModelEndpoint.google,
Providers.VERTEXAI,
Providers.MISTRALAI,
Providers.MISTRAL,
Providers.OLLAMA,
Providers.DEEPSEEK,
Providers.OPENROUTER,
Providers.XAI,
]);
const openAILikeProviders = new Set<string>([
Providers.OPENAI,
Providers.AZURE,
EModelEndpoint.custom,
Providers.MISTRALAI,
Providers.MISTRAL,
Providers.OLLAMA,
Providers.DEEPSEEK,
Providers.OPENROUTER,
Providers.XAI,
]);
export const isOpenAILikeProvider = (provider?: string | null): boolean => {
return openAILikeProviders.has(provider ?? '');
};
export const isDocumentSupportedProvider = (provider?: string | null): boolean => {
return documentSupportedProviders.has(provider ?? '');
};
export const paramEndpoints = new Set<EModelEndpoint | string>([
EModelEndpoint.agents,
EModelEndpoint.openAI,

View file

@@ -475,10 +475,20 @@ export type ContentPart = (
) &
PartMetadata;
export type TextData = (Text & PartMetadata) | undefined;
export type TMessageContentParts =
| { type: ContentTypes.ERROR; text?: string | (Text & PartMetadata); error?: string }
| { type: ContentTypes.THINK; think: string | (Text & PartMetadata) }
| { type: ContentTypes.TEXT; text: string | (Text & PartMetadata); tool_call_ids?: string[] }
| {
type: ContentTypes.ERROR;
text?: string | TextData;
error?: string;
}
| { type: ContentTypes.THINK; think?: string | TextData }
| {
type: ContentTypes.TEXT;
text?: string | TextData;
tool_call_ids?: string[];
}
| {
type: ContentTypes.TOOL_CALL;
tool_call: (