🤖 feat: OpenAI Assistants v2 (initial support) (#2781)

* 🤖 Assistants V2 Support: Part 1

- Separated Azure Assistants to its own endpoint
- File Search / Vector Store integration is incomplete, but can toggle and use storage from playground
- Code Interpreter resource files can be added but not deleted
- GPT-4o is supported
- Many improvements to the Assistants Endpoint overall

data-provider v2 changes

copy existing route as v1

chore: rename new endpoint to reduce comparison operations and add new azure filesource

api: add azureAssistants part 1

force use of version for assistants/assistantsAzure

chore: switch name back to azureAssistants

refactor type version: string | number

Ensure assistants endpoints have version set

fix: isArchived type issue in ConversationListParams

refactor: update assistants mutations/queries with endpoint/version definitions, update Assistants Map structure
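
The reworked Assistants Map can be pictured as a two-level lookup: endpoint first, then assistant ID. A minimal sketch of that shape (names are illustrative, not LibreChat's actual code):

```javascript
// Hypothetical sketch of an endpoint-keyed assistants map; the real
// structure and helper names in LibreChat may differ.
function buildAssistantsMap(listsByEndpoint) {
  const map = {};
  for (const [endpoint, assistants] of Object.entries(listsByEndpoint)) {
    map[endpoint] = {};
    for (const assistant of assistants) {
      map[endpoint][assistant.id] = assistant;
    }
  }
  return map;
}
```

Keying by endpoint first keeps `assistants` and `azureAssistants` entries from colliding when both endpoints are enabled.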

chore: FilePreview component ExtendedFile type assertion

feat: isAssistantsEndpoint helper
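
A helper like this reduces scattered equality checks to one call. A minimal sketch, assuming the two assistants endpoint names used elsewhere in this commit (the real helper lives in the codebase and may differ):

```javascript
// Illustrative sketch of an isAssistantsEndpoint-style helper; the actual
// implementation may differ.
const assistantsEndpoints = new Set(['assistants', 'azureAssistants']);

function isAssistantsEndpoint(endpoint) {
  if (!endpoint) {
    return false;
  }
  return assistantsEndpoints.has(endpoint);
}
```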

chore: remove unused useGenerations

chore(buildTree): type issue

chore(Advanced): type issue (unused component, maybe in future)

first pass for multi-assistant endpoint rewrite

fix(listAssistants): pass params correctly

feat: list separate assistants by endpoint

fix(useTextarea): access assistantMap correctly

fix: assistant endpoint switching, resetting ID

fix: selecting assistant mention (broken during rewrite)

fix: set/invalidate assistants endpoint query data correctly

fix: assistant ID not being reset correctly

getOpenAIClient helper function

feat: add toast for assistant deletion

fix: azure issue where assistants were deleted right after creation

fix: assistant patching

refactor: actions to use getOpenAIClient

refactor: consolidate logic into helpers file

fix: issue where conversation data was not initially available

v1 chat support

refactor(spendTokens): only early return if completionTokens isNaN

fix(OpenAIClient): ensure spendTokens has all necessary params

refactor: route/controller logic

fix(assistants/initializeClient): use defaultHeaders field

fix: sanitize default operation id

chore: bump openai package

first pass v2 action service

feat: retroactive domain parsing for actions added via v1
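
Retroactive parsing here means deriving a domain for action records created before the field existed, presumably from each action's stored server URL. A hedged sketch (function and field names are assumptions):

```javascript
// Hypothetical sketch: derive a domain from an action's server URL so older
// v1 records gain the field newer code expects. Names are illustrative.
function getActionDomain(serverUrl) {
  try {
    return new URL(serverUrl).hostname;
  } catch {
    // Invalid or relative URLs yield no domain.
    return null;
  }
}
```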

feat: delete db records of actions/assistants on openai assistant deletion

chore: remove vision tools from v2 assistants

feat: v2 upload and delete assistant vision images

WIP first pass, thread attachments

fix: show assistant vision files (save local/firebase copy)

continue v2 image support

fix: annotations

fix: refine annotations

show the analyze step as an error if no longer submitting before progress reaches 1, and show file_search as a retrieval tool

fix: abort run, undefined endpoint issue

refactor: consolidate capabilities logic and anticipate versioning
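
Consolidated, version-aware capability logic can be sketched as a lookup keyed by assistants API version; v2 renames retrieval to file_search. The capability names below mirror terms used in this commit, but the table itself is illustrative, not the project's actual config:

```javascript
// Illustrative version-to-capabilities table; an assumption for this sketch.
const capabilitiesByVersion = {
  1: ['code_interpreter', 'retrieval', 'actions', 'tools'],
  2: ['code_interpreter', 'file_search', 'actions', 'tools'],
};

function getCapabilities(version) {
  // Version may arrive as a string or number; object keys coerce either way.
  // Fall back to the newest known version for unrecognized inputs.
  return capabilitiesByVersion[version] ?? capabilitiesByVersion[2];
}
```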

frontend version 2 changes

fix: query selection and filter

add endpoint to unknown filepath

add file IDs to tool resource; deletion still in progress

enable/disable file search

remove version log

* 🤖 Assistants V2 Support: Part 2

🎹 fix: Autocompletion Chrome Bug on Action API Key Input

chore: remove `useOriginNavigate`

chore: set correct OpenAI Storage Source

fix: azure file deletions, instantiate clients by source for deletion

update code interpreter files info

feat: deleteResourceFileId

chore: increase poll interval as azure easily rate limits
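
Run polling with a configurable interval might look like the sketch below; the interval, attempt count, and status values are assumptions for illustration, not the project's actual settings:

```javascript
// Illustrative polling loop with a tunable interval; a longer interval eases
// pressure on rate-limited backends such as Azure. Names are assumptions.
async function pollRunStatus(fetchRun, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const run = await fetchRun();
    if (run.status === 'completed' || run.status === 'failed' || run.status === 'cancelled') {
      return run;
    }
    // Wait before the next poll to avoid hammering the API.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Run polling timed out');
}
```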

fix: openai file deletions; TODO: evaluate rejected promises from settled deletions to determine which DB records to delete
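
The TODO above suggests Promise.allSettled: attempt every remote deletion, then prune only the DB records whose deletions actually fulfilled. A hedged sketch (names are illustrative):

```javascript
// Sketch of the TODO: settle all remote deletions, then report which file IDs
// succeeded (safe to remove from DB) and which were rejected. Illustrative only.
async function settleFileDeletions(fileIds, deleteRemoteFile) {
  const results = await Promise.allSettled(fileIds.map((id) => deleteRemoteFile(id)));
  const deletedIds = fileIds.filter((_, i) => results[i].status === 'fulfilled');
  const failedIds = fileIds.filter((_, i) => results[i].status === 'rejected');
  return { deletedIds, failedIds };
}
```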

file source icons

update table file filters

chore: file search info and versioning

fix: retrieval update with necessary tool_resources if specified

fix(useMentions): add optional chaining in case listMap value is undefined

fix: force assistant avatar roundedness

fix: azure assistants, check correct flag

chore: bump data-provider

* fix: merge conflict

* ci: fix backend tests due to new updates

* chore: update .env.example

* meilisearch improvements

* localization updates

* chore: update comparisons

* feat: add additional metadata: endpoint, author ID

* chore: azureAssistants ENDPOINTS exclusion warning
Danny Avila 2024-05-19 12:56:55 -04:00 committed by GitHub
parent af8bcb08d6
commit 1a452121fa
GPG key ID: B5690EEEBB952194
158 changed files with 4184 additions and 1204 deletions


@@ -3,7 +3,6 @@ const { v4 } = require('uuid');
 const {
   Constants,
   ContentTypes,
-  EModelEndpoint,
   AnnotationTypes,
   defaultOrderQuery,
 } = require('librechat-data-provider');
@@ -50,6 +49,7 @@ async function initThread({ openai, body, thread_id: _thread_id }) {
 * @param {string} params.assistant_id - The current assistant Id.
 * @param {string} params.thread_id - The thread Id.
 * @param {string} params.conversationId - The message's conversationId
+ * @param {string} params.endpoint - The conversation endpoint
 * @param {string} [params.parentMessageId] - Optional if initial message.
 * Defaults to Constants.NO_PARENT.
 * @param {string} [params.instructions] - Optional: from preset for `instructions` field.
@@ -82,7 +82,7 @@ async function saveUserMessage(params) {
   const userMessage = {
     user: params.user,
-    endpoint: EModelEndpoint.assistants,
+    endpoint: params.endpoint,
     messageId: params.messageId,
     conversationId: params.conversationId,
     parentMessageId: params.parentMessageId ?? Constants.NO_PARENT,
@@ -96,7 +96,7 @@ async function saveUserMessage(params) {
   };

   const convo = {
-    endpoint: EModelEndpoint.assistants,
+    endpoint: params.endpoint,
     conversationId: params.conversationId,
     promptPrefix: params.promptPrefix,
     instructions: params.instructions,
@@ -126,6 +126,7 @@ async function saveUserMessage(params) {
 * @param {string} params.model - The model used by the assistant.
 * @param {ContentPart[]} params.content - The message content parts.
 * @param {string} params.conversationId - The message's conversationId
+ * @param {string} params.endpoint - The conversation endpoint
 * @param {string} params.parentMessageId - The latest user message that triggered this response.
 * @param {string} [params.instructions] - Optional: from preset for `instructions` field.
 * Overrides the instructions of the assistant.
@@ -145,7 +146,7 @@ async function saveAssistantMessage(params) {
   const message = await recordMessage({
     user: params.user,
-    endpoint: EModelEndpoint.assistants,
+    endpoint: params.endpoint,
     messageId: params.messageId,
     conversationId: params.conversationId,
     parentMessageId: params.parentMessageId,
@@ -160,7 +161,7 @@ async function saveAssistantMessage(params) {
   });

   await saveConvo(params.user, {
-    endpoint: EModelEndpoint.assistants,
+    endpoint: params.endpoint,
     conversationId: params.conversationId,
     promptPrefix: params.promptPrefix,
     instructions: params.instructions,
@@ -205,20 +206,22 @@ async function addThreadMetadata({ openai, thread_id, messageId, messages }) {
 *
 * @param {Object} params - The parameters for synchronizing messages.
 * @param {OpenAIClient} params.openai - The OpenAI client instance.
+ * @param {string} params.endpoint - The current endpoint.
+ * @param {string} params.thread_id - The current thread ID.
 * @param {TMessage[]} params.dbMessages - The LibreChat DB messages.
 * @param {ThreadMessage[]} params.apiMessages - The thread messages from the API.
- * @param {string} params.conversationId - The current conversation ID.
- * @param {string} params.thread_id - The current thread ID.
 * @param {string} [params.assistant_id] - The current assistant ID.
+ * @param {string} params.conversationId - The current conversation ID.
 * @return {Promise<TMessage[]>} A promise that resolves to the updated messages
 */
 async function syncMessages({
   openai,
-  apiMessages,
-  dbMessages,
-  conversationId,
+  endpoint,
   thread_id,
+  dbMessages,
+  apiMessages,
   assistant_id,
+  conversationId,
 }) {
   let result = [];
   let dbMessageMap = new Map(dbMessages.map((msg) => [msg.messageId, msg]));
@@ -290,7 +293,7 @@ async function syncMessages({
       thread_id,
       conversationId,
       messageId: v4(),
-      endpoint: EModelEndpoint.assistants,
+      endpoint,
       parentMessageId: lastMessage ? lastMessage.messageId : Constants.NO_PARENT,
       role: apiMessage.role,
       isCreatedByUser: apiMessage.role === 'user',
@@ -382,13 +385,21 @@ function mapMessagesToSteps(steps, messages) {
 *
 * @param {Object} params - The parameters for initializing a thread.
 * @param {OpenAIClient} params.openai - The OpenAI client instance.
+ * @param {string} params.endpoint - The current endpoint.
 * @param {string} [params.latestMessageId] - Optional: The latest message ID from LibreChat.
 * @param {string} params.thread_id - Response thread ID.
 * @param {string} params.run_id - Response Run ID.
 * @param {string} params.conversationId - LibreChat conversation ID.
 * @return {Promise<TMessage[]>} A promise that resolves to the updated messages
 */
-async function checkMessageGaps({ openai, latestMessageId, thread_id, run_id, conversationId }) {
+async function checkMessageGaps({
+  openai,
+  endpoint,
+  latestMessageId,
+  thread_id,
+  run_id,
+  conversationId,
+}) {
   const promises = [];
   promises.push(openai.beta.threads.messages.list(thread_id, defaultOrderQuery));
   promises.push(openai.beta.threads.runs.steps.list(thread_id, run_id));
@@ -406,6 +417,7 @@ async function checkMessageGaps({ openai, latestMessageId, thread_id, run_id, co
       role: 'assistant',
       run_id,
       thread_id,
+      endpoint,
       metadata: {
         messageId: latestMessageId,
       },
@@ -452,11 +464,12 @@ async function checkMessageGaps({ openai, latestMessageId, thread_id, run_id, co
   const syncedMessages = await syncMessages({
     openai,
+    endpoint,
+    thread_id,
     dbMessages,
     apiMessages,
-    thread_id,
-    conversationId,
     assistant_id,
+    conversationId,
   });

   return Object.values(
@@ -498,41 +511,62 @@ const recordUsage = async ({
 };

 /**
- * Safely replaces the annotated text within the specified range denoted by start_index and end_index,
- * after verifying that the text within that range matches the given annotation text.
- * Proceeds with the replacement even if a mismatch is found, but logs a warning.
+ * Creates a replaceAnnotation function with internal state for tracking the index offset.
  *
- * @param {string} originalText The original text content.
- * @param {number} start_index The starting index where replacement should begin.
- * @param {number} end_index The ending index where replacement should end.
- * @param {string} expectedText The text expected to be found in the specified range.
- * @param {string} replacementText The text to insert in place of the existing content.
- * @returns {string} The text with the replacement applied, regardless of text match.
+ * @returns {function} The replaceAnnotation function with closure for index offset.
  */
-function replaceAnnotation(originalText, start_index, end_index, expectedText, replacementText) {
-  if (start_index < 0 || end_index > originalText.length || start_index > end_index) {
-    logger.warn(`Invalid range specified for annotation replacement.
-    Attempting replacement with \`replace\` method instead...
-    length: ${originalText.length}
-    start_index: ${start_index}
-    end_index: ${end_index}`);
-    return originalText.replace(originalText, replacementText);
-  }
-
-  const actualTextInRange = originalText.substring(start_index, end_index);
-  if (actualTextInRange !== expectedText) {
-    logger.warn(`The text within the specified range does not match the expected annotation text.
-    Attempting replacement with \`replace\` method instead...
-    Expected: ${expectedText}
-    Actual: ${actualTextInRange}`);
-    return originalText.replace(originalText, replacementText);
-  }
-
-  const beforeText = originalText.substring(0, start_index);
-  const afterText = originalText.substring(end_index);
-  return beforeText + replacementText + afterText;
+function createReplaceAnnotation() {
+  let indexOffset = 0;
+
+  /**
+   * Safely replaces the annotated text within the specified range denoted by start_index and end_index,
+   * after verifying that the text within that range matches the given annotation text.
+   * Proceeds with the replacement even if a mismatch is found, but logs a warning.
+   *
+   * @param {object} params The original text content.
+   * @param {string} params.currentText The current text content, with/without replacements.
+   * @param {number} params.start_index The starting index where replacement should begin.
+   * @param {number} params.end_index The ending index where replacement should end.
+   * @param {string} params.expectedText The text expected to be found in the specified range.
+   * @param {string} params.replacementText The text to insert in place of the existing content.
+   * @returns {string} The text with the replacement applied, regardless of text match.
+   */
+  function replaceAnnotation({
+    currentText,
+    start_index,
+    end_index,
+    expectedText,
+    replacementText,
+  }) {
+    const adjustedStartIndex = start_index + indexOffset;
+    const adjustedEndIndex = end_index + indexOffset;
+
+    if (
+      adjustedStartIndex < 0 ||
+      adjustedEndIndex > currentText.length ||
+      adjustedStartIndex > adjustedEndIndex
+    ) {
+      logger.warn(`Invalid range specified for annotation replacement.
+      Attempting replacement with \`replace\` method instead...
+      length: ${currentText.length}
+      start_index: ${adjustedStartIndex}
+      end_index: ${adjustedEndIndex}`);
+      return currentText.replace(expectedText, replacementText);
+    }
+
+    if (currentText.substring(adjustedStartIndex, adjustedEndIndex) !== expectedText) {
+      return currentText.replace(expectedText, replacementText);
+    }
+
+    indexOffset += replacementText.length - (adjustedEndIndex - adjustedStartIndex);
+    return (
+      currentText.slice(0, adjustedStartIndex) +
+      replacementText +
+      currentText.slice(adjustedEndIndex)
+    );
+  }
+
+  return replaceAnnotation;
 }

 /**
@@ -581,6 +615,11 @@ async function processMessages({ openai, client, messages = [] }) {
       continue;
     }

+    const originalText = currentText;
+    text += originalText;
+
+    const replaceAnnotation = createReplaceAnnotation();
+
     logger.debug('[processMessages] Processing annotations:', annotations);
     for (const annotation of annotations) {
       let file;
@@ -589,14 +628,16 @@
       const file_id = annotationType?.file_id;
       const alreadyProcessed = client.processedFileIds.has(file_id);

-      const replaceCurrentAnnotation = (replacement = '') => {
-        currentText = replaceAnnotation(
-          originalText,
-          annotation.start_index,
-          annotation.end_index,
-          annotation.text,
-          replacement,
-        );
+      const replaceCurrentAnnotation = (replacementText = '') => {
+        const { start_index, end_index, text: expectedText } = annotation;
+        currentText = replaceAnnotation({
+          currentText,
+          start_index,
+          end_index,
+          expectedText,
+          replacementText,
+        });
         edited = true;
       };
@@ -623,7 +664,7 @@
       replaceCurrentAnnotation(`^${sources.length}^`);
     }

-    text += currentText + ' ';
+    text = currentText;

     if (!file) {
       continue;