feat: Message Rate Limiters, Violation Logging, & Ban System 🔨 (#903)
* refactor: require Auth middleware in route index files
* feat: concurrent message limiter
* feat: complete concurrent message limiter with caching
* refactor: SSE response methods separated from handleText
* fix(abortMiddleware): fix req and res order to standard, use endpointOption in req.body
* chore: minor name changes
* refactor: add isUUID condition to saveMessage
* fix(concurrentLimiter): logic correctly handles the max number of concurrent messages and res closing/finalization
* chore: bump keyv and remove console.log from Message
* fix(concurrentLimiter): ensure messages are only saved in later message children
* refactor(concurrentLimiter): use KeyvFile instead, could make other stores configurable in the future
* feat: add denyRequest function for error responses
* feat(utils): add isStringTruthy function
Introduce the isStringTruthy function to the utilities module to check if a string value is a case-insensitive match for 'true'
* feat: add optional message rate limiters by IP and userId
* feat: add optional message rate limiters by IP and userId to edit route
* refactor: rename isStringTruthy to isTrue for brevity
* refactor(getError): use map to make code cleaner
* refactor: use memory for concurrent rate limiter to prevent clearing on startup/exit, add multiple log files, fix error message for concurrent violation
* feat: check if errorMessage is object, stringify if so
* chore: send object to denyRequest which will stringify it
* feat: log excessive requests
* fix(getError): correctly pluralize messages
* refactor(limiters): make type consistent between logs and errorMessage
* refactor(cache): move files out of lib/db into separate cache dir
* feat: add getLogStores function so a Keyv instance is not redundantly created on every violation (sketched after this list)
feat: separate violation logging to own function with logViolation
* fix: cache/index.js export, properly record userViolations
* refactor(messageLimiters): use new logging method, add logging to registrations
* refactor(logViolation): make userLogs an array of logs per user
* feat: add logging to login limiter
* refactor: pass req as first param to logViolation and record offending IP
* refactor: rename isTrue helper fn to isEnabled (see the sketch after this list)
* feat: add simple non_browser check and log violation
* fix: open handles in unit tests, remove KeyvMongo as not used and properly mock global fetch
* chore: adjust nodemon ignore paths to properly ignore logs
* feat: add math helper function for safe use of eval
* refactor(api/convos): use middleware at top of file to avoid redundancy
* feat: add delete all static method for Sessions
* fix: redirect to login on refresh if user is not found, or the session is not found but hasn't expired (ban case)
* refactor(getLogStores): adjust return type
* feat: add ban violation and check ban logic
refactor(logViolation): pass both req and res objects
* feat: add removePorts helper function
* refactor: rename getError to getMessageError and add getLoginError for displaying different login errors
* fix(AuthContext): fix type issue and remove unused code
* refactor(bans): ban by ip and user id, send response based on origin
* chore: add frontend ban messages
* refactor(routes/oauth): add ban check to handler, also consolidate logic to avoid redundancy
* feat: add ban check to AI messaging routes
* feat: add ban check to login/registration
* fix(ci/api): mock KeyvMongo to avoid tests hanging
* docs: update .env.example
* refactor(banViolation): calculate interval rate crossover, early return if duration is invalid
ci(banViolation): add tests to ensure users are only banned when expected
* docs: improve wording for mod system
* feat: add configurable env variables for violation scores
* chore: add jsdoc for uaParser.js
* chore: improve ban text log
* chore: update bun test scripts
* refactor(math.js): add fallback values
* fix(KeyvMongo/banLogs): refactor keyv instances to top of files to avoid memory leaks, refactor ban logic to use getLogStores instead
refactor(getLogStores): get a single log store by type
* fix(ci): refactor tests due to banLogs changes, also make sure to clear and revoke sessions even if ban duration is 0
* fix(banViolation.js): getLogStores import
* feat: handle 500 code error at login
* fix(middleware): handle case where user.id is _id and not just id
* ci: add ban secrets for backend unit tests
* refactor: logout user upon ban
* chore: log session delete message only if deletedCount > 0
* refactor: change default ban duration (2h) and make logic more clear in JSDOC
* fix: login and registration limiters will now return rate limiting error
* fix: userId not parsable as a non-ObjectId string
* feat: add useTimeout hook to properly clear timeouts when invoking functions within them
refactor(AuthContext): clean up code by using the new hook and defining types in ~/common
* fix: login error message for rate limits
* docs: add info for automated mod system and rate limiters, update other docs accordingly
* chore: bump data-provider version
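The two helpers called out above (isEnabled and getLogStores) are small enough to sketch. The following is a minimal, illustrative version based only on the behavior described in these notes — a case-insensitive check for 'true', and one lazily created, reused Keyv store per violation type — so the actual implementations in the repository may differ:

const Keyv = require('keyv');

// isEnabled (formerly isStringTruthy / isTrue): case-insensitive check that an
// env-style string value equals 'true'
function isEnabled(value) {
  return typeof value === 'string' && value.trim().toLowerCase() === 'true';
}

// getLogStores: return a single log store by type, creating each Keyv instance
// once and reusing it so it is not rebuilt on every violation
const stores = new Map();

function getLogStores(type) {
  if (!stores.has(type)) {
    stores.set(type, new Keyv({ namespace: type })); // in-memory by default
  }
  return stores.get(type);
}

module.exports = { isEnabled, getLogStores };

The code that follows is the Message model referenced above, where the isUUID condition is applied to conversationId through a zod schema.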
const { z } = require('zod');
const Message = require('./schema/messageSchema');
const logger = require('~/config/winston');
const idSchema = z.string().uuid();

module.exports = {
  Message,

  async saveMessage({
    user,
    messageId,
    newMessageId,
    conversationId,
    parentMessageId,
    sender,
    text,
    isCreatedByUser,
    error,
    unfinished,
    files,
    isEdited,
    finish_reason,
    tokenCount,
    plugin,
    plugins,
    model,
  }) {
    try {
      const validConvoId = idSchema.safeParse(conversationId);
      if (!validConvoId.success) {
        return;
      }
refactor: Encrypt & Expire User Provided Keys, feat: Rate Limiting (#874)
* docs: make_your_own.md formatting fix for mkdocs
* feat: add express-mongo-sanitize
feat: add login/registration rate limiting
* chore: remove unnecessary console log
* wip: remove token handling from localStorage to an encrypted DB solution (see the sketch after this list)
* refactor: minor change to UserService
* fix mongo query and add keys route to server
* fix backend controllers and simplify schema/crud
* refactor: rename token to key to separate from access/refresh tokens, setTokenDialog -> setKeyDialog
* refactor(schemas): TEndpointOption token -> key
* refactor(api): use new encrypted key retrieval system
* fix(SetKeyDialog): fix key prop error
* fix(abortMiddleware): pass random UUID if messageId is not generated yet for proper error display on frontend
* fix(getUserKey): wrong prop passed in arg, adds error handling
* fix: prevent message without conversationId from saving to DB, prevents branching on the frontend to a new top-level branch
* refactor: change wording of multiple display messages
* refactor(checkExpiry -> checkUserKeyExpiry): move to UserService file
* fix: type imports from common
* refactor(SubmitButton): convert to TS
* refactor(key.ts): change localStorage map key name
* refactor: add new custom tailwind classes to better match openAI colors
* chore: remove unnecessary warning and catch ScreenShot error
* refactor: move userKey frontend logic to hooks and remove use of localStorage and instead query the DB
* refactor: invalidate correct query key, memoize userKey hook, conditionally render SetKeyDialog to avoid unnecessary calls, refactor SubmitButton props and useEffect for showing 'provide key first'
* fix(SetKeyDialog): use enum-like object for expiry values
feat(Dropdown): add optionsClassName to dynamically change dropdown options container classes
* fix: handle edge case where a user had provided a key but the server switches to an env variable for keys
* refactor(OpenAI/titleConvo): move titling to client to retain authorized credentials in message lifecycle for titling
* fix(azure): handle user_provided keys correctly for azure
* feat: send user Id to OpenAI to differentiate users in completion requests
* refactor(OpenAI/titleConvo): adding tokens helps minimize LLM from using the language in title response
* feat: add delete endpoint for keys
* chore: remove throttling of title
* feat: add 'Data controls' to Settings, add 'Revoke' keys feature in Key Dialog and Data controls
* refactor: reorganize PluginsClient files in langchain format
* feat: use langchain for titling convos
* chore: cleanup titling convo, with fallback to original method, escape braces, use only snippet for language detection
* refactor: move helper functions to appropriate langchain folders for reusability
* fix: userProvidesKey handling for gptPlugins
* fix: frontend handling of plugins key
* chore: cleanup logging and ts-ignore SSE
* fix: forwardRef misuse in DangerButton
* fix(GoogleConfig/FileUpload): localize errors and simplify validation with zod
* fix: cleanup google logging and fix user provided key handling
* chore: remove titling from google
* chore: removing logging from browser endpoint
* wip: fix menu flicker
* feat: useLocalStorage hook
* feat: add Tooltip for UI
* refactor(EndpointMenu): utilize Tooltip and useLocalStorage, remove old 'New Chat' slide-over
* fix(e2e): use testId for endpoint menu trigger
* chore: final touches to EndpointMenu before future refactor to declutter component
* refactor(localization): change select endpoint to open menu and add translations
* chore: add final prop to error message response
* ci: minor edits to facilitate testing
* ci: new e2e test which tests for new key setting/revoking features
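As a rough illustration of the 'encrypted DB solution' and key-expiry check described above: a minimal sketch assuming AES-256-CBC via Node's crypto module and hypothetical CREDS_KEY / CREDS_IV environment variables; the repository's real helpers, variable names, and error messages may differ.

const crypto = require('crypto');

// Hypothetical env-provided secrets: a 32-byte key and a 16-byte IV, hex-encoded
const key = Buffer.from(process.env.CREDS_KEY, 'hex');
const iv = Buffer.from(process.env.CREDS_IV, 'hex');

function encrypt(plainText) {
  const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  return cipher.update(plainText, 'utf8', 'hex') + cipher.final('hex');
}

function decrypt(cipherText) {
  const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
  return decipher.update(cipherText, 'hex', 'utf8') + decipher.final('utf8');
}

// checkUserKeyExpiry-style guard: reject a stored key whose expiry has passed
function checkUserKeyExpiry(expiresAt) {
  if (new Date(expiresAt) < new Date()) {
    throw new Error('Your API key has expired. Please provide it again.');
  }
}

module.exports = { encrypt, decrypt, checkUserKeyExpiry };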
feat: Vision Support + New UI (#1203)
* feat: add timer duration to showToast, show toast for preset selection
* refactor: replace old /chat/ route with /c/. e2e tests will fail here
* refactor: move typedefs to root of /api/ and add a few to assistant types in TS
* refactor: reorganize data-provider imports, fix dependency cycle, strategize new plan to separate react dependent packages
* feat: add dataService for uploading images
* feat(data-provider): add mutation keys
* feat: file resizing and upload
* WIP: initial API image handling
* fix: catch JSON.parse of localStorage tools
* chore: experimental: use module-alias for absolute imports
* refactor: change temp_file_id strategy
* fix: updating files state by using Map and defining react query callbacks in a way that keeps them during component unmount, initial delete handling
* feat: properly handle file deletion
* refactor: unexpose complete filepath and resize from server for higher fidelity
* fix: make sure resized height, width is saved, catch bad requests
* refactor: use absolute imports
* fix: prevent setOptions from being called more than once for OpenAIClient, made note to fix for PluginsClient
* refactor: import supportsFiles and models vars from schemas
* fix: correctly replace temp file id
* refactor(BaseClient): use absolute imports, pass message 'opts' to buildMessages method, count tokens for nested objects/arrays
* feat: add validateVisionModel to determine if model has vision capabilities
* chore(checkBalance): update jsdoc
* feat: formatVisionMessage: change message content format dependent on role and image_urls passed (sketched after this list)
* refactor: add usage to File schema, make create and updateFile, correctly set and remove TTL
* feat: working vision support
TODO: file size, type, amount validations, making sure they are styled right, and making sure you can add images from the clipboard/dragging
* feat: clipboard support for uploading images
* feat: handle files on drop to screen, refactor top level view code to Presentation component so the useDragHelpers hook has ChatContext
* fix(Images): replace uploaded images in place
* feat: add filepath validation to protect sensitive files
* fix: ensure correct file_ids are pushed and not the Map key values
* fix(ToastContext): type issue
* feat: add basic file validation
* fix(useDragHelpers): correct context issue with `files` dependency
* refactor: consolidate setErrors logic to setError
* feat: add dialog Image overlay on image click
* fix: close endpoints menu on click
* chore: set detail to auto, make note for configuration
* fix: react warning (button desc. of button)
* refactor: optimize filepath handling, pass file_ids to images for easier re-use
* refactor: optimize image file handling, allow re-using files in regen, pass more file metadata in messages
* feat: lazy loading images including use of upload preview
* fix: SetKeyDialog closing, stopPropagation on Dialog content click
* style(EndpointMenuItem): tighten up the style, fix dark theme showing in lightmode, make menu more ux friendly
* style: change maxheight of all settings textareas to 138px from 300px
* style: better styling for textarea and enclosing buttons
* refactor(PresetItems): swap back edit and delete icons
* feat: make textarea placeholder dynamic to endpoint
* style: show user hover buttons only on hover when message is streaming
* fix: ordered list not going past 9, fix css
* feat: add User/AI labels; style: hide loading spinner
* feat: add back custom footer, change original footer text
* feat: dynamic landing icons based on endpoint
* chore: comment out assistants route
* fix: autoScroll to newest on /c/ view
* fix: Export Conversation on new UI
* style: match message style of official more closely
* ci: fix api jest unit tests, comment out e2e tests for now as they will fail until addressed
* feat: more file validation and use blob in preview field, not filepath, to fix temp deletion
* feat: filefilter for multer
* feat: better AI labels based on custom name, model, and endpoint instead of `ChatGPT`
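To make the formatVisionMessage note above concrete, here is a minimal sketch of the kind of transformation it describes, using the OpenAI-style content-array format for vision messages (a text part plus one image_url part per attached image, with detail set to 'auto' as mentioned above). The function shape and parameter names are illustrative, not the project's exact signature:

// When image URLs are attached to a user message, the plain string content becomes
// an array of text and image parts; other messages pass through unchanged
function formatVisionMessage({ role, content, image_urls = [] }) {
  if (role !== 'user' || image_urls.length === 0) {
    return { role, content };
  }
  return {
    role,
    content: [
      { type: 'text', text: content },
      ...image_urls.map((url) => ({ type: 'image_url', image_url: { url, detail: 'auto' } })),
    ],
  };
}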
      const update = {
        user,
        messageId: newMessageId || messageId,
        conversationId,
        parentMessageId,
        sender,
        text,
        isCreatedByUser,
        isEdited,
        finish_reason,
        error,
        unfinished,
        tokenCount,
        plugin,
        plugins,
        model,
      };

      if (files) {
        update.files = files;
      }

      // may also need to update the conversation here
      await Message.findOneAndUpdate({ messageId }, update, { upsert: true, new: true });

      return {
        messageId,
        conversationId,
        parentMessageId,
        sender,
        text,
        isCreatedByUser,
        tokenCount,
      };
    } catch (err) {
      logger.error('Error saving message:', err);
      throw new Error('Failed to save message.');
    }
  },
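For illustration, a hypothetical call to the saveMessage method defined above; the import path and field values are made up, and the surrounding mongoose connection is assumed to already exist:

const { v4: uuidv4 } = require('uuid');
const { saveMessage } = require('./Message'); // path assumed for illustration

async function example() {
  // conversationId must be a valid UUID, otherwise saveMessage returns early without writing
  const saved = await saveMessage({
    user: 'user-123',
    messageId: uuidv4(),
    conversationId: uuidv4(),
    parentMessageId: '00000000-0000-0000-0000-000000000000',
    sender: 'User',
    text: 'Hello!',
    isCreatedByUser: true,
    tokenCount: 2,
  });
  // saved contains: messageId, conversationId, parentMessageId, sender, text,
  // isCreatedByUser, and tokenCount
  return saved;
}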
refactor: Client Classes & Azure OpenAI as a separate Endpoint (#532)
* refactor: start new client classes, test localAi support
* feat: create base class, extend chatgpt from base
* refactor(BaseClient.js): change userId parameter to user
refactor(BaseClient.js): change userId parameter to user
feat(OpenAIClient.js): add sendMessage method
refactor(OpenAIClient.js): change getConversation method to use user parameter instead of userId
refactor(OpenAIClient.js): change saveMessageToDatabase method to use user parameter instead of userId
refactor(OpenAIClient.js): change buildPrompt method to use messages parameter instead of orderedMessages
feat(index.js): export client classes
refactor(askGPTPlugins.js): use req.body.token or process.env.OPENAI_API_KEY as OpenAI API key
refactor(index.js): comment out askOpenAI route
feat(index.js): add openAI route
feat(openAI.js): add new route for OpenAI API requests with support for progress updates and aborting requests.
* refactor(BaseClient.js): use optional chaining operator to access messageId property
refactor(OpenAIClient.js): use orderedMessages instead of messages to build prompt
refactor(OpenAIClient.js): use optional chaining operator to access messageId property
refactor(fetch-polyfill.js): remove fetch polyfill
refactor(openAI.js): comment out debug option in clientOptions
* refactor: update import statements and remove unused imports in several files
feat: add getAzureCredentials function to azureUtils module
docs: update comments in azureUtils module
* refactor(utils): rename migrateConversations to migrateDataToFirstUser for clarity and consistency
* feat(chatgpt-client.js): add getAzureCredentials function to retrieve Azure credentials
feat(chatgpt-client.js): use getAzureCredentials function to generate reverseProxyUrl
feat(OpenAIClient.js): add isChatCompletion property to determine if chat completion model is used
feat(OpenAIClient.js): add saveOptions parameter to sendMessage and buildPrompt methods
feat(OpenAIClient.js): modify buildPrompt method to handle chat completion model
feat(openAI.js): modify endpointOption to include modelOptions instead of individual options
refactor(OpenAIClient.js): modify getDelta property to use isChatCompletion property instead of isChatGptModel property
refactor(OpenAIClient.js): modify sendMessage method to use saveOptions parameter instead of modelOptions parameter
refactor(OpenAIClient.js): modify buildPrompt method to use saveOptions parameter instead of modelOptions parameter
refactor(OpenAIClient.js): modify ask method to include endpointOption parameter
* chore: delete draft file
* refactor(OpenAIClient.js): extract sendCompletion method from sendMessage method for reusability
* refactor(BaseClient.js): move sendMessage method to BaseClient class
feat(OpenAIClient.js): inherit from BaseClient class and implement necessary methods and properties for OpenAIClient class.
* refactor(BaseClient.js): rename getBuildPromptOptions to getBuildMessagesOptions
feat(BaseClient.js): add buildMessages method to BaseClient class
fix(ChatGPTClient.js): use message.text instead of message.message
refactor(ChatGPTClient.js): rename buildPromptBody to buildMessagesBody
refactor(ChatGPTClient.js): remove console.debug statement and add debug log for prompt variable
refactor(OpenAIClient.js): move setOptions method to the bottom of the class
feat(OpenAIClient.js): add support for cl100k_base encoding
feat(OpenAIClient.js): add support for unofficial chat GPT models
feat(OpenAIClient.js): add support for custom modelOptions
feat(OpenAIClient.js): add caching for tokenizers
feat(OpenAIClient.js): add freeAndInitializeEncoder method to free and reinitialize tokenizers
refactor(OpenAIClient.js): rename getBuildPromptOptions to getBuildMessagesOptions
refactor(OpenAIClient.js): rename buildPrompt to buildMessages
refactor(OpenAIClient.js): remove endpointOption from ask function arguments in openAI.js
* refactor(ChatGPTClient.js, OpenAIClient.js): improve code readability and consistency
- In ChatGPTClient.js, update the roleLabel and messageString variables to handle cases where the message object does not have an isCreatedByUser property or a role property with a value of 'user'.
- In OpenAIClient.js, rename the freeAndInitializeEncoder method to freeAndResetEncoder to better reflect its functionality. Also, update the method calls to reflect the new name. Additionally, update the getTokenCount method to handle errors by calling the freeAndResetEncoder method instead of the now-renamed freeAndInitializeEncoder method.
* refactor(OpenAIClient.js): extract instructions object to a separate variable and add it to payload after formatted messages
fix(OpenAIClient.js): handle cases where progressMessage.choices is undefined or empty
* refactor(BaseClient.js): extract addInstructions method from sendMessage method
feat(OpenAIClient.js): add maxTokensMap object to map maximum tokens for each model
refactor(OpenAIClient.js): use addInstructions method in buildMessages method instead of manually building the payload list
* refactor(OpenAIClient.js): remove unnecessary condition for modelOptions.model property in buildMessages method
* feat(BaseClient.js): add support for token count tracking and context strategy
feat(OpenAIClient.js): add support for token count tracking and context strategy
feat(Message.js): add tokenCount field to Message schema and updateMessage function
* refactor(BaseClient.js): add support for refining messages based on token limit (see the sketch after this list)
feat(OpenAIClient.js): add support for context refinement strategy
refactor(OpenAIClient.js): use context refinement strategy in message sending
refactor(server/index.js): improve code readability by breaking long lines
* refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` for clarity
feat(BaseClient.js): add `refinePrompt` and `refinePromptTemplate` to handle message refinement
feat(BaseClient.js): add `refineMessages` method to refine messages
feat(BaseClient.js): add `handleContextStrategy` method to handle context strategy
feat(OpenAIClient.js): add `abortController` to `buildPrompt` method options
refactor(OpenAIClient.js): change `payload` and `tokenCountMap` to let variables in `handleContextStrategy` method
refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` in `handleContextStrategy` method for consistency
refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` in `getMessagesWithinTokenLimit` method for consistency
refactor(BaseClient.js): change `remainingContext` to `remainingContext
* chore(openAI.js): comment out contextStrategy option in clientOptions
* chore(openAI.js): comment out debug option in clientOptions object
* test: BaseClient tests in progress
* test: Complete OpenAIClient & BaseClient tests
* fix(OpenAIClient.js): remove unnecessary whitespace
fix(OpenAIClient.js): remove unused variables and comments
fix(OpenAIClient.test.js): combine getTokenCount and freeAndResetEncoder tests
* chore(.eslintrc.js): add rule for maximum of 1 empty line
feat(ask/openAI.js): add abortMessage utility function
fix(ask/openAI.js): handle error and abort message if partial text is less than 2 characters
feat(utils/index.js): export abortMessage utility function
* test: complete additional tests
* feat: Azure OpenAI as a separate endpoint
* chore: remove extraneous console logs
* fix(azureOpenAI): use chatCompletion endpoint
* chore(initializeClient.js): delete initializeClient.js file
chore(askOpenAI.js): delete old OpenAI route handler
chore(handlers.js): remove trailing whitespace in thought variable assignment
* chore(chatgpt-client.js): remove unused chatgpt-client.js file
refactor(index.js): remove askClient import and export from index.js
* chore(chatgpt-client.tokens.js): update test script for memory usage and encoding performance
The test script in `chatgpt-client.tokens.js` has been updated to measure the memory usage and encoding performance of the client. The script now includes information about the initial memory usage, peak memory usage, final memory usage, and memory usage after a timeout. It also provides insights into the number of encoding requests that can be processed per second.
The script has been modified to use the `OpenAIClient` class instead of the `ChatGPTClient` class. Additionally, the number of iterations for the encoding loop has been reduced to 10,000.
A timeout function has been added to simulate a delay of 15 seconds. After the timeout, the memory usage is measured again.
The script now handles uncaught exceptions and logs any errors that occur, except for errors related to failed fetch requests.
Note: This is a test script and should not be used in production
* feat(FakeClient.js): add a new class `FakeClient` that extends `BaseClient` and implements methods for a fake client
feat(FakeClient.js): implement the `setOptions` method to handle options for the fake client
feat(FakeClient.js): implement the `initializeFakeClient` function to initialize a fake client with options and fake messages
fix(OpenAIClient.js): remove duplicate `maxTokensMap` import and use the one from utils
feat(BaseClient): return promptTokens and completionTokens
* refactor(gptPlugins): refactor ChatAgent to PluginsClient, which extends OpenAIClient
* refactor: client paths
* chore(jest.config.js): remove jest.config.js file
* fix(PluginController.js): update file path to manifest.json
feat(gptPlugins.js): add support for aborting messages
refactor(ask/index.js): rename askGPTPlugins to gptPlugins for consistency
* fix(BaseClient.js): fix spacing in generateTextStream function signature
refactor(BaseClient.js): remove unnecessary push to currentMessages in generateUserMessage function
refactor(BaseClient.js): remove unnecessary push to currentMessages in handleStartMethods function
refactor(PluginsClient.js): remove unused variables and date formatting in constructor
refactor(PluginsClient.js): simplify mapping of pastMessages in getCompletionPayload function
* refactor(GoogleClient): GoogleClient now extends BaseClient
* chore(.env.example): add AZURE_OPENAI_MODELS variable
fix(api/routes/ask/gptPlugins.js): enable Azure integration if PLUGINS_USE_AZURE is true
fix(api/routes/endpoints.js): getOpenAIModels function now accepts options, use AZURE_OPENAI_MODELS if PLUGINS_USE_AZURE is true
fix(client/components/Endpoints/OpenAI/Settings.jsx): remove console.log statement
docs(features/azure.md): add documentation for Azure OpenAI integration and environment variables
* fix(e2e:popup): includes the icon + endpoint names in role, name property
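A minimal sketch of the context strategy described above: keep the newest messages until the model's token budget is exhausted, tracking remainingContextTokens along the way. The function name follows the commit notes, but the exact signature and the refinement/summarization step in BaseClient may differ:

// Walk messages from newest to oldest, keeping them while tokens remain;
// older messages that no longer fit are candidates for refinement instead
function getMessagesWithinTokenLimit(orderedMessages, maxContextTokens) {
  let remainingContextTokens = maxContextTokens;
  const context = [];

  for (let i = orderedMessages.length - 1; i >= 0; i--) {
    const message = orderedMessages[i];
    const tokenCount = message.tokenCount ?? 0;
    if (tokenCount > remainingContextTokens) {
      break;
    }
    remainingContextTokens -= tokenCount;
    context.unshift(message);
  }

  return { context, remainingContextTokens };
}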
  async updateMessage(message) {
    try {
      const { messageId, ...update } = message;
      update.isEdited = true;
feat: ChatGPT Plugins/OpenAPI specs for Plugins Endpoint (#620)
* wip: proof of concept for openapi chain
* chore(api): update langchain dependency to version 0.0.105
* feat(Plugins): use ChatGPT Plugins/OpenAPI specs (first pass)
* chore(manifest.json): update pluginKey for "Browser" tool to "web-browser"
chore(handleTools.js): update customConstructor key for "web-browser" tool
* fix(handleSubmit.js): set unfinished property to false for all endpoints
* fix(handlers.js): remove unnecessary capitalizeWords function and use action.tool directly
refactor(endpoints.js): rename availableTools to tools and transform it into a map
* feat(endpoints): add plugins selector to endpoints file
refactor(CodeBlock.tsx): refactor to typescript
refactor(Plugin.tsx): use recoil Map for plugin name and refactor to typescript
chore(Message.jsx): linting
chore(PluginsOptions/index.jsx): remove comment/linting
chore(svg): export Clipboard and CheckMark components from SVG index and refactor to typescript
* fix(OpenAPIPlugin.js): rename readYamlFile function to readSpecFile
fix(OpenAPIPlugin.js): handle JSON files in readSpecFile function
fix(OpenAPIPlugin.js): handle JSON URLs in getSpec function
fix(OpenAPIPlugin.js): handle JSON variables in createOpenAPIPlugin function
fix(OpenAPIPlugin.js): add description for variables in createOpenAPIPlugin function
fix(OpenAPIPlugin.js): add optional flag for is_user_authenticated and has_user_authentication in ManifestDefinition
fix(loadSpecs.js): add optional flag for is_user_authenticated and has_user_authentication in ManifestDefinition
fix(Plugin.tsx): remove unnecessary callback parameter in getPluginName function
fix(getDefaultConversation.js): fix browser console error: handle null value for lastConversationSetup in getDefaultConversation function
* feat(api): add new tools
Add Ai PDF tool for super-fast, interactive chats with PDFs of any size, complete with page references for fact checking.
Add VoxScript tool for searching through YouTube transcripts, financial data sources, Google Search results, and more.
Add WebPilot tool for browsing and QA of webpages, PDFs, and data. Generate articles from one or more URLs.
feat(api): update OpenAPIPlugin.js
- Add support for bearer token authorization in the OpenAPIPlugin.
- Add support for custom headers in the OpenAPIPlugin.
fix(api): fix loadTools.js
- Pass the user parameter to the loadSpecs function.
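A hedged sketch of how the bearer-token and custom-header support mentioned above could be assembled before calling a plugin's API. `buildRequestHeaders` and the auth field names are illustrative assumptions, not the actual `OpenAPIPlugin.js` code; the `librechat_user_id` header is the one referenced later in this changelog:

```js
// Illustrative only: the real manifest auth fields may be named differently.
function buildRequestHeaders({ auth = {}, headers = {} }, userId) {
  const result = { ...headers, librechat_user_id: userId };
  if (auth.authorization_type === 'bearer' && auth.token) {
    result.Authorization = `Bearer ${auth.token}`;
  }
  return result;
}

// Usage (hypothetical manifest excerpt):
// buildRequestHeaders({ auth: { authorization_type: 'bearer', token: 'abc' } }, user.id);
```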
* feat(PluginsClient.js): import findMessageContent function from utils
feat(PluginsClient.js): add message parameter to options object in initializeCustomAgent function
feat(PluginsClient.js): add content to errorMessage if message content is found
feat(PluginsClient.js): break out of loop if message content is found
feat(PluginsClient.js): add delay option with value of 8 to generateTextStream function
feat(PluginsClient.js): add support for process.env.PORT environment variable in app.listen function
feat(askyourpdf.json): add askyourpdf plugin configuration
feat(metar.json): add metar plugin configuration
feat(askyourpdf.yaml): add askyourpdf plugin OpenAPI specification
feat(OpenAPIPlugin.js): add message parameter to createOpenAPIPlugin function
feat(OpenAPIPlugin.js): add description_for_model to chain run message
feat(addOpenAPISpecs.js): remove verbose option from loadSpecs function call
fix(loadSpecs.js): add 'message' parameter to the loadSpecs function
feat(findMessageContent.js): add utility function to find message content in JSON objects
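A minimal sketch of a `findMessageContent`-style utility as described above: recursively walk a parsed JSON value and return the first string found under a `message` key. The exact keys the real helper inspects are an assumption here:

```js
// Recursively search a parsed JSON value for message text (heuristic assumed).
function findMessageContent(value) {
  if (value === null || typeof value !== 'object') {
    return null;
  }
  if (typeof value.message === 'string') {
    return value.message;
  }
  for (const nested of Object.values(value)) {
    const found = findMessageContent(nested);
    if (found) {
      return found;
    }
  }
  return null;
}
```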
* fix(PluginStoreDialog.tsx): update z-index value for the dialog container
The z-index value for the dialog container was updated to "102" to ensure it appears above other elements on the page.
* chore(web_pilot.json): add "params" field with "user_has_request" parameter set to true
* chore(eslintrc.js): update eslint rules
fix(Login.tsx): add missing semicolon after import statement
* fix(package-lock.json): update langchain dependency to version ^0.0.105
* fix(OpenAPIPlugin.js): change header key from 'id' to 'librechat_user_id' for consistency and clarity
feat(plugins): add documentation for using official ChatGPT Plugins with OpenAPI specs
This commit adds a new file `chatgpt_plugins_openapi.md` to the `docs/features/plugins` directory. The file provides detailed information on how to use official ChatGPT Plugins with OpenAPI specifications. It explains the components of a plugin, including the Plugin Manifest file and the OpenAPI spec. It also covers the process of adding a plugin, editing manifest files, and customizing OpenAPI spec files. Additionally, the commit includes disclaimers about the limitations and compatibility of plugins with LibreChat. The documentation also clarifies that the use of ChatGPT Plugins with LibreChat does not violate OpenAI's Terms of Service.
The purpose of this commit is to provide comprehensive documentation for developers who want to integrate ChatGPT Plugins into their projects using OpenAPI specs. It aims to guide them through the process of adding and configuring plugins, as well as addressing potential issues.
chore(introduction.md): update link to ChatGPT Plugins documentation
docs(introduction.md): clarify the purpose of the plugins endpoint and its capabilities
* fix(OpenAPIPlugin.js): update SUFFIX variable to provide a clearer description
docs(chatgpt_plugins_openapi.md): update information about adding plugins via url on the frontend
* feat(PluginsClient.js): sendIntermediateMessage on successful Agent load
fix(PluginsClient.js, server/index.js, gptPlugins.js): linting fixes
docs(chatgpt_plugins_openapi.md): update links and add additional information
* Update chatgpt_plugins_openapi.md
* chore: rebuild package-lock file
* chore: format/lint all files with new rules
* chore: format all files
* chore(README.md): update AI model selection list
The AI model selection list in the README.md file has been updated to reflect the current options available. The "Anthropic" model has been added as an alternative name for the "Claude" model.
* fix(Plugin.tsx): type issue
* feat(tools): add new tool WebPilot
feat(tools): remove tool Weather Report
feat(tools): add new tool Prompt Perfect
feat(tools): add new tool Scholarly Graph Link
* feat(OpenAPIPlugin.js): add getSpec and readSpecFile functions
feat(OpenAPIPlugin.spec.js): add tests for readSpecFile, getSpec, and createOpenAPIPlugin functions
* chore(agent-demo-1.js): remove unused code and dependencies
chore(agent-demo-2.js): remove unused code and dependencies
chore(demo.js): remove unused code and dependencies
* feat(addOpenAPISpecs): add function to transform OpenAPI specs into desired format
feat(addOpenAPISpecs.spec): add tests for transformSpec function
fix(loadSpecs): remove debugging code
* feat(loadSpecs.spec.js): add unit tests for ManifestDefinition, validateJson, and loadSpecs functions
* fix: package file resolution bug
* chore: move scholarly_graph_link manifest to 'has-issues'
* refactor(client/hooks): convert to TS and export from index
* Update introduction.md
* Update chatgpt_plugins_openapi.md
2023-07-16 12:19:47 -04:00
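For reference, a `validateJson`-style guard like the one covered by the new unit tests can be as small as the following; the return-`false` convention is assumed:

```js
// Parse a JSON string, returning the parsed object or false when invalid.
function validateJson(json) {
  try {
    return JSON.parse(json);
  } catch (e) {
    return false;
  }
}
```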
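The `updateMessage`, `deleteMessagesSince`, `getMessages`, and `deleteMessages` helpers from the Message model read as follows: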
```js
  async updateMessage(message) {
    try {
      const { messageId, ...update } = message;
      update.isEdited = true;
      const updatedMessage = await Message.findOneAndUpdate({ messageId }, update, { new: true });
      if (!updatedMessage) {
        throw new Error('Message not found.');
      }
      return {
        messageId: updatedMessage.messageId,
        conversationId: updatedMessage.conversationId,
        parentMessageId: updatedMessage.parentMessageId,
        sender: updatedMessage.sender,
        text: updatedMessage.text,
        isCreatedByUser: updatedMessage.isCreatedByUser,
        tokenCount: updatedMessage.tokenCount,
        isEdited: true,
      };
    } catch (err) {
      logger.error('Error updating message:', err);
      throw new Error('Failed to update message.');
    }
  },

  async deleteMessagesSince({ messageId, conversationId }) {
    try {
      const message = await Message.findOne({ messageId }).lean();
      if (message) {
        return await Message.find({ conversationId }).deleteMany({
          createdAt: { $gt: message.createdAt },
        });
      }
    } catch (err) {
      logger.error('Error deleting messages:', err);
      throw new Error('Failed to delete messages.');
    }
  },

  async getMessages(filter) {
    try {
      return await Message.find(filter).sort({ createdAt: 1 }).lean();
    } catch (err) {
      logger.error('Error getting messages:', err);
      throw new Error('Failed to get messages.');
    }
  },

  async deleteMessages(filter) {
    try {
      return await Message.deleteMany(filter);
    } catch (err) {
      logger.error('Error deleting messages:', err);
      throw new Error('Failed to delete messages.');
    }
  },
};
```
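A brief, hypothetical usage example of the helpers above, assuming they are exported from the Message model module (the import path and wrapper functions are illustrative):

```js
const { updateMessage, getMessages } = require('./Message'); // path is illustrative

// Persist an edit: updateMessage flags the record as edited and returns the
// trimmed projection shown above.
async function saveEdit({ messageId, text, tokenCount }) {
  return await updateMessage({ messageId, text, tokenCount });
}

// Load a conversation oldest-first, matching the ascending createdAt sort.
async function loadThread(conversationId) {
  return await getMessages({ conversationId });
}
```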