refactor: Encrypt & Expire User Provided Keys, feat: Rate Limiting (#874)
* docs: make_your_own.md formatting fix for mkdocs
* feat: add express-mongo-sanitize
feat: add login/registration rate limiting
* chore: remove unnecessary console log
* wip: remove token handling from localStorage to encrypted DB solution
* refactor: minor change to UserService
* fix mongo query and add keys route to server
* fix backend controllers and simplify schema/crud
* refactor: rename token to key to separate from access/refresh tokens, setTokenDialog -> setKeyDialog
* refactor(schemas): TEndpointOption token -> key
* refactor(api): use new encrypted key retrieval system
* fix(SetKeyDialog): fix key prop error
* fix(abortMiddleware): pass random UUID if messageId is not generated yet for proper error display on frontend
* fix(getUserKey): wrong prop passed in arg, adds error handling
* fix: prevent message without conversationId from saving to DB, prevents branching on the frontend to a new top-level branch
* refactor: change wording of multiple display messages
* refactor(checkExpiry -> checkUserKeyExpiry): move to UserService file
* fix: type imports from common
* refactor(SubmitButton): convert to TS
* refactor(key.ts): change localStorage map key name
* refactor: add new custom tailwind classes to better match openAI colors
* chore: remove unnecessary warning and catch ScreenShot error
* refactor: move userKey frontend logic to hooks and remove use of localStorage and instead query the DB
* refactor: invalidate correct query key, memoize userKey hook, conditionally render SetKeyDialog to avoid unnecessary calls, refactor SubmitButton props and useEffect for showing 'provide key first'
* fix(SetKeyDialog): use enum-like object for expiry values
feat(Dropdown): add optionsClassName to dynamically change dropdown options container classes
* fix: handle edge case where user had provided a key but the server changes to env variable for keys
* refactor(OpenAI/titleConvo): move titling to client to retain authorized credentials in message lifecycle for titling
* fix(azure): handle user_provided keys correctly for azure
* feat: send user Id to OpenAI to differentiate users in completion requests
* refactor(OpenAI/titleConvo): adding tokens helps keep the LLM from using the language in the title response
* feat: add delete endpoint for keys
* chore: remove throttling of title
* feat: add 'Data controls' to Settings, add 'Revoke' keys feature in Key Dialog and Data controls
* refactor: reorganize PluginsClient files in langchain format
* feat: use langchain for titling convos
* chore: cleanup titling convo, with fallback to original method, escape braces, use only snippet for language detection
* refactor: move helper functions to appropriate langchain folders for reusability
* fix: userProvidesKey handling for gptPlugins
* fix: frontend handling of plugins key
* chore: cleanup logging and ts-ignore SSE
* fix: forwardRef misuse in DangerButton
* fix(GoogleConfig/FileUpload): localize errors and simplify validation with zod
* fix: cleanup google logging and fix user provided key handling
* chore: remove titling from google
* chore: remove logging from browser endpoint
* wip: fix menu flicker
* feat: useLocalStorage hook
* feat: add Tooltip for UI
* refactor(EndpointMenu): utilize Tooltip and useLocalStorage, remove old 'New Chat' slide-over
* fix(e2e): use testId for endpoint menu trigger
* chore: final touches to EndpointMenu before future refactor to declutter component
* refactor(localization): change select endpoint to open menu and add translations
* chore: add final prop to error message response
* ci: minor edits to facilitate testing
* ci: new e2e test which tests for new key setting/revoking features
2023-09-06 10:46:27 -04:00
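To ground the key-expiry mechanic described in the PR above, here is a minimal sketch of what a checkUserKeyExpiry-style guard could look like, assuming the user-provided key is stored server-side alongside an expiresAt timestamp. The field name, error text, and error code are illustrative assumptions, not the project's actual implementation; the blamed source file itself resumes right after this sketch.

// Illustrative sketch only: `expiresAt`, the message, and the error code are assumed.
const checkUserKeyExpiry = (expiresAt, endpoint) => {
  const expiry = new Date(expiresAt);
  if (Number.isNaN(expiry.getTime()) || expiry < new Date()) {
    const error = new Error(`Your ${endpoint} key has expired. Please provide it again.`);
    error.code = 'expired_user_key'; // assumed code the frontend could branch on
    throw error;
  }
};

// Assumed call site: checkUserKeyExpiry(userKeyRecord.expiresAt, 'openAI');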
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
feat: ConversationSummaryBufferMemory (#973)
* refactor: pass model in message edit payload, use encoder in standalone util function
* feat: add summaryBuffer helper
* refactor(api/messages): use new countTokens helper and add auth middleware at top
* wip: ConversationSummaryBufferMemory
* refactor: move pre-generation helpers to prompts dir
* chore: remove console log
* chore: remove test as payload will no longer carry tokenCount
* chore: update getMessagesWithinTokenLimit JSDoc
* refactor: optimize getMessagesForConversation and also break on summary, feat(ci): getMessagesForConversation tests
* refactor(getMessagesForConvo): count '00000000-0000-0000-0000-000000000000' as root message
* chore: add newer model to token map
* fix: condition was pointing to prop of array instead of message prop
* refactor(BaseClient): use object for refineMessages param, rename 'summary' to 'summaryMessage', add previous_summary
refactor(getMessagesWithinTokenLimit): replace text and tokenCount if should summarize, summary, and summaryTokenCount are present
fix/refactor(handleContextStrategy): use the right comparison length for context diff, and replace payload first message when a summary is present
* chore: log previous_summary if debugging
* refactor(formatMessage): assume if role is defined that it's a valid value
* refactor(getMessagesWithinTokenLimit): remove summary logic
refactor(handleContextStrategy): add usePrevSummary logic in case only summary was pruned
refactor(loadHistory): initial message query will return all ordered messages but keep track of the latest summary
refactor(getMessagesForConversation): use object for single param, edit jsdoc, edit all files using the method
refactor(ChatGPTClient): order messages before buildPrompt is called, TODO: add convoSumBuffMemory logic
* fix: undefined handling and summarizing only when shouldRefineContext is true
* chore(BaseClient): fix test results omitting system role for summaries and test edge case
* chore: export summaryBuffer from index file
* refactor(OpenAIClient/BaseClient): move refineMessages to subclass, implement LLM initialization for summaryBuffer
* feat: add OPENAI_SUMMARIZE to enable summarizing, refactor: rename client prop 'shouldRefineContext' to 'shouldSummarize', change contextStrategy value to 'summarize' from 'refine'
* refactor: rename refineMessages method to summarizeMessages for clarity
* chore: clarify summary future intent in .env.example
* refactor(initializeLLM): handle case for either 'model' or 'modelName' being passed
* feat(gptPlugins): enable summarization for plugins
* refactor(gptPlugins): utilize new initializeLLM method and formatting methods for messages, use payload array for currentMessages and assign pastMessages sooner
* refactor(agents): use ConversationSummaryBufferMemory for both agent types
* refactor(formatMessage): optimize original method for langchain, add helper function for langchain messages, add JSDocs and tests
* refactor(summaryBuffer): add helper to createSummaryBufferMemory, and use new formatting helpers
* fix: forgot to spread formatMessages; also took the opportunity to pluralize the filename
* refactor: pass memory to tools, namely openapi specs. not used and may never be used by new method but added for testing
* ci(formatMessages): add more exhaustive checks for langchain messages
* feat: add debug env var for OpenAI
* chore: delete unnecessary comments
* chore: add extra note about summary feature
* fix: remove tokenCount from payload instructions
* fix: test fail
* fix: only pass instructions to payload when defined or not empty object
* refactor: fromPromptMessages is deprecated, use renamed method fromMessages
* refactor: use 'includes' instead of 'startsWith' for extended OpenRouter compatibility
* fix(PluginsClient.buildPromptBody): handle undefined message strings
* chore: log langchain titling error
* feat: getModelMaxTokens helper
* feat: tokenSplit helper
* feat: summary prompts updated
* fix: optimize _CUT_OFF_SUMMARIZER prompt
* refactor(summaryBuffer): use custom summary prompt, allow prompt to be passed, pass humanPrefix and aiPrefix to memory, along with any future variables, rename messagesToRefine to context
* fix(summaryBuffer): handle edge case where messagesToRefine exceeds summary context,
refactor(BaseClient): allow custom maxContextTokens to be passed to getMessagesWithinTokenLimit, add defined check before unshifting summaryMessage, update shouldSummarize based on this
refactor(OpenAIClient): use getModelMaxTokens, use cut-off message method for summary if no messages were left after pruning
* fix(handleContextStrategy): handle case where incoming prompt is bigger than model context
* chore: rename refinedContent to splitText
* chore: remove unnecessary debug log
2023-09-26 21:02:28 -04:00
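For orientation on the memory primitive this PR is named after, here is a minimal, hedged sketch of langchain's ConversationSummaryBufferMemory in JavaScript. The import paths follow the langchain version of that era, the model name and token limit are placeholder choices, and LibreChat wraps this in its own summaryBuffer helper rather than using it this directly. The annotated source continues after the sketch.

const { ChatOpenAI } = require('langchain/chat_models/openai');
const { ConversationSummaryBufferMemory } = require('langchain/memory');

// Once buffered turns exceed maxTokenLimit, older turns are condensed into a running
// summary produced by the provided LLM; the most recent turns stay verbatim.
const memory = new ConversationSummaryBufferMemory({
  llm: new ChatOpenAI({ modelName: 'gpt-3.5-turbo', temperature: 0 }),
  maxTokenLimit: 1000,
  returnMessages: true,
});

const demo = async () => {
  await memory.saveContext(
    { input: 'Hi, can you help me debug a zod schema?' },
    { output: 'Sure, paste the schema.' },
  );
  const { history } = await memory.loadMemoryVariables({}); // summary message + recent turns
  return history;
};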
const { langPrompt, createTitlePrompt, escapeBraces, getSnippet } = require('../prompts');
const { createStructuredOutputChainFromZod } = require('langchain/chains/openai_functions');

// Zod schema for the language-detection chain's structured output.
const langSchema = z.object({
  language: z.string().describe('The language of the input text (full noun, no abbreviations).'),
});
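As a quick aside on how the .describe() hints above surface: createStructuredOutputChainFromZod turns the zod schema into an OpenAI function-calling parameter schema (roughly what the zod-to-json-schema package produces), so those description strings become instructions to the model. A small self-contained sketch, assuming that package is available:

const { z } = require('zod');
const { zodToJsonSchema } = require('zod-to-json-schema');

// Same shape as langSchema above; the .describe() text ends up as the property's
// `description` in the generated JSON schema sent as the function's parameters.
const exampleSchema = z.object({
  language: z.string().describe('The language of the input text (full noun, no abbreviations).'),
});

console.log(JSON.stringify(zodToJsonSchema(exampleSchema), null, 2));
// → { type: 'object', properties: { language: { type: 'string', description: '...' } }, required: ['language'], ... }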
feat: Accurate Token Usage Tracking & Optional Balance (#1018)
* refactor(Chains/llms): allow passing callbacks
* refactor(BaseClient): accurately count completion tokens as generation only
* refactor(OpenAIClient): remove unused getTokenCountForResponse, pass streaming var and callbacks in initializeLLM
* wip: summary prompt tokens
* refactor(summarizeMessages): new cut-off strategy that generates a better summary by adding context from beginning, truncating the middle, and providing the end
wip: draft out relevant providers and variables for token tracing
* refactor(createLLM): make streaming prop false by default
* chore: remove use of getTokenCountForResponse
* refactor(agents): use BufferMemory, as ConversationSummaryBufferMemory token usage is not easy to trace
* chore: remove passing of streaming prop, also console log useful vars for tracing
* feat: formatFromLangChain helper function to count tokens for ChatModelStart
* refactor(initializeLLM): add role for LLM tracing
* chore(formatFromLangChain): update JSDoc
* feat(formatMessages): formats langChain messages into OpenAI payload format
* chore: install openai-chat-tokens
* refactor(formatMessage): optimize conditional langChain logic
fix(formatFromLangChain): fix destructuring
* feat: accurate prompt tokens for ChatModelStart before generation
* refactor(handleChatModelStart): move to callbacks dir, use factory function
* refactor(initializeLLM): rename 'role' to 'context'
* feat(Balance/Transaction): new schema/models for tracking token spend
refactor(Key): factor out model export to separate file
* refactor(initializeClient): add req,res objects to client options
* feat: add-balance script to add to an existing user's token balance
refactor(Transaction): use multiplier map/function, return balance update
* refactor(Tx): update enum for tokenType, return 1 for multiplier if no map match
* refactor(Tx): add fair fallback value multiplier in case the config result is undefined
* refactor(Balance): rename 'tokens' to 'tokenCredits'
* feat: balance check, add tx.js for new tx-related methods and tests
* chore(summaryPrompts): update prompt token count
* refactor(callbacks): pass req, res
wip: check balance
* refactor(Tx): make convoId a String type, fix(calculateTokenValue)
* refactor(BaseClient): add conversationId as client prop when assigned
* feat(RunManager): track LLM runs with manager, track token spend from LLM,
refactor(OpenAIClient): use RunManager to create callbacks, pass user prop to langchain api calls
* feat(spendTokens): helper to spend prompt/completion tokens
* feat(checkBalance): add helper to check, log, deny request if balance doesn't have enough funds
refactor(Balance): static check method to return object instead of boolean now
wip(OpenAIClient): implement use of checkBalance
* refactor(initializeLLM): add token buffer to assure summary isn't generated when subsequent payload is too large
refactor(OpenAIClient): add checkBalance
refactor(createStartHandler): add checkBalance
* chore: remove prompt and completion token logging from route handler
* chore(spendTokens): add JSDoc
* feat(logTokenCost): record transactions for basic api calls
* chore(ask/edit): invoke getResponseSender only once per API call
* refactor(ask/edit): pass promptTokens to getIds and include in abort data
* refactor(getIds -> getReqData): rename function
* refactor(Tx): increase value if incomplete message
* feat: record tokenUsage when message is aborted
* refactor: subtract tokens when payload includes function_call
* refactor: add namespace for token_balance
* fix(spendTokens): only execute if corresponding token type amounts are defined
* refactor(checkBalance): throws Error if not enough token credits
* refactor(runTitleChain): pass and use signal, spread object props in create helpers, and use 'call' instead of 'run'
* fix(abortMiddleware): circular dependency, and default to empty string for completionTokens
* fix: properly cancel title requests when there aren't enough tokens to generate
* feat(predictNewSummary): custom chain for summaries to allow signal passing
refactor(summaryBuffer): use new custom chain
* feat(RunManager): add getRunByConversationId method, refactor: remove run and throw llm error on handleLLMError
* refactor(createStartHandler): if summary, add error details to runs
* fix(OpenAIClient): support aborting from summarization & showing error to user
refactor(summarizeMessages): remove unnecessary operations counting summaryPromptTokens and note for alternative, pass signal to summaryBuffer
* refactor(logTokenCost -> recordTokenUsage): rename
* refactor(checkBalance): include promptTokens in errorMessage
* refactor(checkBalance/spendTokens): move to models dir
* fix(createLanguageChain): correctly pass config
* refactor(initializeLLM/title): add tokenBuffer of 150 for balance check
* refactor(openAPIPlugin): pass signal and memory, filter functions by the one being called
* refactor(createStartHandler): add error to run if context is plugins as well
* refactor(RunManager/handleLLMError): throw error immediately if plugins, don't remove run
* refactor(PluginsClient): pass memory and signal to tools, cleanup error handling logic
* chore: use absolute equality for addTitle condition
* refactor(checkBalance): move checkBalance to execute after userMessage and tokenCounts are saved, also make conditional
* style: icon changes to match official
* fix(BaseClient): getTokenCountForResponse -> getTokenCount
* fix(formatLangChainMessages): add kwargs as fallback prop from lc_kwargs, update JSDoc
* refactor(Tx.create): does not update balance if CHECK_BALANCE is not enabled
* fix(e2e/cleanUp): cleanup new collections, import all model methods from index
* fix(config/add-balance): add uncaughtException listener
* fix: circular dependency
* refactor(initializeLLM/checkBalance): append new generations to errorMessage if cost exceeds balance
* fix(handleResponseMessage): only record token usage in this method if not error and completion is not skipped
* fix(createStartHandler): correct condition for generations
* chore: bump postcss due to moderate severity vulnerability
* chore: bump zod due to low severity vulnerability
* chore: bump openai & data-provider version
* feat(types): OpenAI Message types
* chore: update bun lockfile
* refactor(CodeBlock): add error block formatting
* refactor(utils/Plugin): factor out formatJSON and cn to separate files (json.ts and cn.ts), add extractJSON
* chore(logViolation): delete user_id after error is logged
* refactor(getMessageError -> Error): change to React.FC, add token_balance handling, use extractJSON to determine JSON instead of regex
* fix(DALL-E): use latest openai SDK
* chore: reorganize imports, fix type issue
* feat(server): add balance route
* fix(api/models): add auth
* feat(data-provider): /api/balance query
* feat: show balance if checking is enabled, refetch on final message or error
* chore: update docs, .env.example with token_usage info, add balance script command
* fix(Balance): fallback to empty obj for balance query
* style: slight adjustment of balance element
* docs(token_usage): add PR notes
2023-10-05 18:34:10 -04:00
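To make the Transaction/Balance arithmetic above concrete, here is a hedged sketch of a multiplier map and a spendTokens-style helper. The rates, model keys, and record shapes are illustrative assumptions, not the project's actual values or schema; the annotated source resumes after the sketch.

// Assumed credit rates per 1,000 tokens, keyed by model and token type (illustrative only).
const tokenRates = {
  'gpt-3.5-turbo': { prompt: 1.5, completion: 2 },
  'gpt-4': { prompt: 30, completion: 60 },
};

// Fall back to 1 when the model has no configured rate, mirroring the PR note
// "return 1 for multiplier if no map match".
const getMultiplier = ({ model, tokenType }) => tokenRates[model]?.[tokenType] ?? 1;

// spendTokens-style helper (sketch): only records a token type when its count is
// defined, and expresses spend as a negative credit adjustment.
const spendTokens = ({ model, promptTokens, completionTokens }) => {
  const txs = [];
  if (promptTokens != null) {
    txs.push({
      tokenType: 'prompt',
      rawAmount: -(promptTokens / 1000) * getMultiplier({ model, tokenType: 'prompt' }),
    });
  }
  if (completionTokens != null) {
    txs.push({
      tokenType: 'completion',
      rawAmount: -(completionTokens / 1000) * getMultiplier({ model, tokenType: 'completion' }),
    });
  }
  return txs; // in the real feature these would become Transaction docs that update the Balance
};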
|
|
|
const createLanguageChain = (config) =>
|
refactor: Encrypt & Expire User Provided Keys, feat: Rate Limiting (#874)
* docs: make_your_own.md formatting fix for mkdocs
* feat: add express-mongo-sanitize
feat: add login/registration rate limiting
* chore: remove unnecessary console log
* wip: remove token handling from localStorage to encrypted DB solution
* refactor: minor change to UserService
* fix mongo query and add keys route to server
* fix backend controllers and simplify schema/crud
* refactor: rename token to key to separate from access/refresh tokens, setTokenDialog -> setKeyDialog
* refactor(schemas): TEndpointOption token -> key
* refactor(api): use new encrypted key retrieval system
* fix(SetKeyDialog): fix key prop error
* fix(abortMiddleware): pass random UUID if messageId is not generated yet for proper error display on frontend
* fix(getUserKey): wrong prop passed in arg, adds error handling
* fix: prevent message without conversationId from saving to DB, prevents branching on the frontend to a new top-level branch
* refactor: change wording of multiple display messages
* refactor(checkExpiry -> checkUserKeyExpiry): move to UserService file
* fix: type imports from common
* refactor(SubmitButton): convert to TS
* refactor(key.ts): change localStorage map key name
* refactor: add new custom tailwind classes to better match openAI colors
* chore: remove unnecessary warning and catch ScreenShot error
* refactor: move userKey frontend logic to hooks and remove use of localStorage and instead query the DB
* refactor: invalidate correct query key, memoize userKey hook, conditionally render SetKeyDialog to avoid unnecessary calls, refactor SubmitButton props and useEffect for showing 'provide key first'
* fix(SetKeyDialog): use enum-like object for expiry values
feat(Dropdown): add optionsClassName to dynamically change dropdown options container classes
* fix: handle edge case where user had provided a key but the server changes to env variable for keys
* refactor(OpenAI/titleConvo): move titling to client to retain authorized credentials in message lifecycle for titling
* fix(azure): handle user_provided keys correctly for azure
* feat: send user Id to OpenAI to differentiate users in completion requests
* refactor(OpenAI/titleConvo): adding tokens helps minimize LLM from using the language in title response
* feat: add delete endpoint for keys
* chore: remove throttling of title
* feat: add 'Data controls' to Settings, add 'Revoke' keys feature in Key Dialog and Data controls
* refactor: reorganize PluginsClient files in langchain format
* feat: use langchain for titling convos
* chore: cleanup titling convo, with fallback to original method, escape braces, use only snippet for language detection
* refactor: move helper functions to appropriate langchain folders for reusability
* fix: userProvidesKey handling for gptPlugins
* fix: frontend handling of plugins key
* chore: cleanup logging and ts-ignore SSE
* fix: forwardRef misuse in DangerButton
* fix(GoogleConfig/FileUpload): localize errors and simplify validation with zod
* fix: cleanup google logging and fix user provided key handling
* chore: remove titling from google
* chore: removing logging from browser endpoint
* wip: fix menu flicker
* feat: useLocalStorage hook
* feat: add Tooltip for UI
* refactor(EndpointMenu): utilize Tooltip and useLocalStorage, remove old 'New Chat' slide-over
* fix(e2e): use testId for endpoint menu trigger
* chore: final touches to EndpointMenu before future refactor to declutter component
* refactor(localization): change select endpoint to open menu and add translations
* chore: add final prop to error message response
* ci: minor edits to facilitate testing
* ci: new e2e test which tests for new key setting/revoking features
2023-09-06 10:46:27 -04:00
|
|
|
createStructuredOutputChainFromZod(langSchema, {
|
|
|
|
|
prompt: langPrompt,
|
feat: Accurate Token Usage Tracking & Optional Balance (#1018)
* refactor(Chains/llms): allow passing callbacks
* refactor(BaseClient): accurately count completion tokens as generation only
* refactor(OpenAIClient): remove unused getTokenCountForResponse, pass streaming var and callbacks in initializeLLM
* wip: summary prompt tokens
* refactor(summarizeMessages): new cut-off strategy that generates a better summary by adding context from beginning, truncating the middle, and providing the end
wip: draft out relevant providers and variables for token tracing
* refactor(createLLM): make streaming prop false by default
* chore: remove use of getTokenCountForResponse
* refactor(agents): use BufferMemory as ConversationSummaryBufferMemory token usage not easy to trace
* chore: remove passing of streaming prop, also console log useful vars for tracing
* feat: formatFromLangChain helper function to count tokens for ChatModelStart
* refactor(initializeLLM): add role for LLM tracing
* chore(formatFromLangChain): update JSDoc
* feat(formatMessages): formats langChain messages into OpenAI payload format
* chore: install openai-chat-tokens
* refactor(formatMessage): optimize conditional langChain logic
fix(formatFromLangChain): fix destructuring
* feat: accurate prompt tokens for ChatModelStart before generation
* refactor(handleChatModelStart): move to callbacks dir, use factory function
* refactor(initializeLLM): rename 'role' to 'context'
* feat(Balance/Transaction): new schema/models for tracking token spend
refactor(Key): factor out model export to separate file
* refactor(initializeClient): add req,res objects to client options
* feat: add-balance script to add to an existing users' token balance
refactor(Transaction): use multiplier map/function, return balance update
* refactor(Tx): update enum for tokenType, return 1 for multiplier if no map match
* refactor(Tx): add fair fallback value multiplier incase the config result is undefined
* refactor(Balance): rename 'tokens' to 'tokenCredits'
* feat: balance check, add tx.js for new tx-related methods and tests
* chore(summaryPrompts): update prompt token count
* refactor(callbacks): pass req, res
wip: check balance
* refactor(Tx): make convoId a String type, fix(calculateTokenValue)
* refactor(BaseClient): add conversationId as client prop when assigned
* feat(RunManager): track LLM runs with manager, track token spend from LLM,
refactor(OpenAIClient): use RunManager to create callbacks, pass user prop to langchain api calls
* feat(spendTokens): helper to spend prompt/completion tokens
* feat(checkBalance): add helper to check, log, deny request if balance doesn't have enough funds
refactor(Balance): static check method to return object instead of boolean now
wip(OpenAIClient): implement use of checkBalance
* refactor(initializeLLM): add token buffer to assure summary isn't generated when subsequent payload is too large
refactor(OpenAIClient): add checkBalance
refactor(createStartHandler): add checkBalance
* chore: remove prompt and completion token logging from route handler
* chore(spendTokens): add JSDoc
* feat(logTokenCost): record transactions for basic api calls
* chore(ask/edit): invoke getResponseSender only once per API call
* refactor(ask/edit): pass promptTokens to getIds and include in abort data
* refactor(getIds -> getReqData): rename function
* refactor(Tx): increase value if incomplete message
* feat: record tokenUsage when message is aborted
* refactor: subtract tokens when payload includes function_call
* refactor: add namespace for token_balance
* fix(spendTokens): only execute if corresponding token type amounts are defined
* refactor(checkBalance): throws Error if not enough token credits
* refactor(runTitleChain): pass and use signal, spread object props in create helpers, and use 'call' instead of 'run'
* fix(abortMiddleware): circular dependency, and default to empty string for completionTokens
* fix: properly cancel title requests when there isn't enough tokens to generate
* feat(predictNewSummary): custom chain for summaries to allow signal passing
refactor(summaryBuffer): use new custom chain
* feat(RunManager): add getRunByConversationId method, refactor: remove run and throw llm error on handleLLMError
* refactor(createStartHandler): if summary, add error details to runs
* fix(OpenAIClient): support aborting from summarization & showing error to user
refactor(summarizeMessages): remove unnecessary operations counting summaryPromptTokens and note for alternative, pass signal to summaryBuffer
* refactor(logTokenCost -> recordTokenUsage): rename
* refactor(checkBalance): include promptTokens in errorMessage
* refactor(checkBalance/spendTokens): move to models dir
* fix(createLanguageChain): correctly pass config
* refactor(initializeLLM/title): add tokenBuffer of 150 for balance check
* refactor(openAPIPlugin): pass signal and memory, filter functions by the one being called
* refactor(createStartHandler): add error to run if context is plugins as well
* refactor(RunManager/handleLLMError): throw error immediately if plugins, don't remove run
* refactor(PluginsClient): pass memory and signal to tools, cleanup error handling logic
* chore: use absolute equality for addTitle condition
* refactor(checkBalance): move checkBalance to execute after userMessage and tokenCounts are saved, also make conditional
* style: icon changes to match official
* fix(BaseClient): getTokenCountForResponse -> getTokenCount
* fix(formatLangChainMessages): add kwargs as fallback prop from lc_kwargs, update JSDoc
* refactor(Tx.create): does not update balance if CHECK_BALANCE is not enabled
* fix(e2e/cleanUp): cleanup new collections, import all model methods from index
* fix(config/add-balance): add uncaughtException listener
* fix: circular dependency
* refactor(initializeLLM/checkBalance): append new generations to errorMessage if cost exceeds balance
* fix(handleResponseMessage): only record token usage in this method if not error and completion is not skipped
* fix(createStartHandler): correct condition for generations
* chore: bump postcss due to moderate severity vulnerability
* chore: bump zod due to low severity vulnerability
* chore: bump openai & data-provider version
* feat(types): OpenAI Message types
* chore: update bun lockfile
* refactor(CodeBlock): add error block formatting
* refactor(utils/Plugin): factor out formatJSON and cn to separate files (json.ts and cn.ts), add extractJSON
* chore(logViolation): delete user_id after error is logged
* refactor(getMessageError -> Error): change to React.FC, add token_balance handling, use extractJSON to determine JSON instead of regex
* fix(DALL-E): use latest openai SDK
* chore: reorganize imports, fix type issue
* feat(server): add balance route
* fix(api/models): add auth
* feat(data-provider): /api/balance query
* feat: show balance if checking is enabled, refetch on final message or error
* chore: update docs, .env.example with token_usage info, add balance script command
* fix(Balance): fallback to empty obj for balance query
* style: slight adjustment of balance element
* docs(token_usage): add PR notes
2023-10-05 18:34:10 -04:00
|
|
|
...config,
|
refactor: Encrypt & Expire User Provided Keys, feat: Rate Limiting (#874)
* docs: make_your_own.md formatting fix for mkdocs
* feat: add express-mongo-sanitize
feat: add login/registration rate limiting
* chore: remove unnecessary console log
* wip: remove token handling from localStorage to encrypted DB solution
* refactor: minor change to UserService
* fix mongo query and add keys route to server
* fix backend controllers and simplify schema/crud
* refactor: rename token to key to separate from access/refresh tokens, setTokenDialog -> setKeyDialog
* refactor(schemas): TEndpointOption token -> key
* refactor(api): use new encrypted key retrieval system
* fix(SetKeyDialog): fix key prop error
* fix(abortMiddleware): pass random UUID if messageId is not generated yet for proper error display on frontend
* fix(getUserKey): wrong prop passed in arg, adds error handling
* fix: prevent message without conversationId from saving to DB, prevents branching on the frontend to a new top-level branch
* refactor: change wording of multiple display messages
* refactor(checkExpiry -> checkUserKeyExpiry): move to UserService file
* fix: type imports from common
* refactor(SubmitButton): convert to TS
* refactor(key.ts): change localStorage map key name
* refactor: add new custom tailwind classes to better match openAI colors
* chore: remove unnecessary warning and catch ScreenShot error
* refactor: move userKey frontend logic to hooks and remove use of localStorage and instead query the DB
* refactor: invalidate correct query key, memoize userKey hook, conditionally render SetKeyDialog to avoid unnecessary calls, refactor SubmitButton props and useEffect for showing 'provide key first'
* fix(SetKeyDialog): use enum-like object for expiry values
feat(Dropdown): add optionsClassName to dynamically change dropdown options container classes
* fix: handle edge case where user had provided a key but the server changes to env variable for keys
* refactor(OpenAI/titleConvo): move titling to client to retain authorized credentials in message lifecycle for titling
* fix(azure): handle user_provided keys correctly for azure
* feat: send user Id to OpenAI to differentiate users in completion requests
* refactor(OpenAI/titleConvo): adding tokens helps minimize LLM from using the language in title response
* feat: add delete endpoint for keys
* chore: remove throttling of title
* feat: add 'Data controls' to Settings, add 'Revoke' keys feature in Key Dialog and Data controls
* refactor: reorganize PluginsClient files in langchain format
* feat: use langchain for titling convos
* chore: cleanup titling convo, with fallback to original method, escape braces, use only snippet for language detection
* refactor: move helper functions to appropriate langchain folders for reusability
* fix: userProvidesKey handling for gptPlugins
* fix: frontend handling of plugins key
* chore: cleanup logging and ts-ignore SSE
* fix: forwardRef misuse in DangerButton
* fix(GoogleConfig/FileUpload): localize errors and simplify validation with zod
* fix: cleanup google logging and fix user provided key handling
* chore: remove titling from google
* chore: removing logging from browser endpoint
* wip: fix menu flicker
* feat: useLocalStorage hook
* feat: add Tooltip for UI
* refactor(EndpointMenu): utilize Tooltip and useLocalStorage, remove old 'New Chat' slide-over
* fix(e2e): use testId for endpoint menu trigger
* chore: final touches to EndpointMenu before future refactor to declutter component
* refactor(localization): change select endpoint to open menu and add translations
* chore: add final prop to error message response
* ci: minor edits to facilitate testing
* ci: new e2e test which tests for new key setting/revoking features
2023-09-06 10:46:27 -04:00
|
|
|
// verbose: true,
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
const titleSchema = z.object({
|
2023-09-20 18:45:56 -04:00
|
|
|
title: z.string().describe('The conversation title in title-case, in the given language.'),
|
refactor: Encrypt & Expire User Provided Keys, feat: Rate Limiting (#874)
* docs: make_your_own.md formatting fix for mkdocs
* feat: add express-mongo-sanitize
feat: add login/registration rate limiting
* chore: remove unnecessary console log
* wip: remove token handling from localStorage to encrypted DB solution
* refactor: minor change to UserService
* fix mongo query and add keys route to server
* fix backend controllers and simplify schema/crud
* refactor: rename token to key to separate from access/refresh tokens, setTokenDialog -> setKeyDialog
* refactor(schemas): TEndpointOption token -> key
* refactor(api): use new encrypted key retrieval system
* fix(SetKeyDialog): fix key prop error
* fix(abortMiddleware): pass random UUID if messageId is not generated yet for proper error display on frontend
* fix(getUserKey): wrong prop passed in arg, adds error handling
* fix: prevent message without conversationId from saving to DB, prevents branching on the frontend to a new top-level branch
* refactor: change wording of multiple display messages
* refactor(checkExpiry -> checkUserKeyExpiry): move to UserService file
* fix: type imports from common
* refactor(SubmitButton): convert to TS
* refactor(key.ts): change localStorage map key name
* refactor: add new custom tailwind classes to better match openAI colors
* chore: remove unnecessary warning and catch ScreenShot error
* refactor: move userKey frontend logic to hooks and remove use of localStorage and instead query the DB
* refactor: invalidate correct query key, memoize userKey hook, conditionally render SetKeyDialog to avoid unnecessary calls, refactor SubmitButton props and useEffect for showing 'provide key first'
* fix(SetKeyDialog): use enum-like object for expiry values
feat(Dropdown): add optionsClassName to dynamically change dropdown options container classes
* fix: handle edge case where user had provided a key but the server changes to env variable for keys
* refactor(OpenAI/titleConvo): move titling to client to retain authorized credentials in message lifecycle for titling
* fix(azure): handle user_provided keys correctly for azure
* feat: send user Id to OpenAI to differentiate users in completion requests
* refactor(OpenAI/titleConvo): adding tokens helps minimize LLM from using the language in title response
* feat: add delete endpoint for keys
* chore: remove throttling of title
* feat: add 'Data controls' to Settings, add 'Revoke' keys feature in Key Dialog and Data controls
* refactor: reorganize PluginsClient files in langchain format
* feat: use langchain for titling convos
* chore: cleanup titling convo, with fallback to original method, escape braces, use only snippet for language detection
* refactor: move helper functions to appropriate langchain folders for reusability
* fix: userProvidesKey handling for gptPlugins
* fix: frontend handling of plugins key
* chore: cleanup logging and ts-ignore SSE
* fix: forwardRef misuse in DangerButton
* fix(GoogleConfig/FileUpload): localize errors and simplify validation with zod
* fix: cleanup google logging and fix user provided key handling
* chore: remove titling from google
* chore: removing logging from browser endpoint
* wip: fix menu flicker
* feat: useLocalStorage hook
* feat: add Tooltip for UI
* refactor(EndpointMenu): utilize Tooltip and useLocalStorage, remove old 'New Chat' slide-over
* fix(e2e): use testId for endpoint menu trigger
* chore: final touches to EndpointMenu before future refactor to declutter component
* refactor(localization): change select endpoint to open menu and add translations
* chore: add final prop to error message response
* ci: minor edits to facilitate testing
* ci: new e2e test which tests for new key setting/revoking features
2023-09-06 10:46:27 -04:00
|
|
|
});
|
feat: Accurate Token Usage Tracking & Optional Balance (#1018)
* refactor(Chains/llms): allow passing callbacks
* refactor(BaseClient): accurately count completion tokens as generation only
* refactor(OpenAIClient): remove unused getTokenCountForResponse, pass streaming var and callbacks in initializeLLM
* wip: summary prompt tokens
* refactor(summarizeMessages): new cut-off strategy that generates a better summary by adding context from beginning, truncating the middle, and providing the end
wip: draft out relevant providers and variables for token tracing
* refactor(createLLM): make streaming prop false by default
* chore: remove use of getTokenCountForResponse
* refactor(agents): use BufferMemory as ConversationSummaryBufferMemory token usage not easy to trace
* chore: remove passing of streaming prop, also console log useful vars for tracing
* feat: formatFromLangChain helper function to count tokens for ChatModelStart
* refactor(initializeLLM): add role for LLM tracing
* chore(formatFromLangChain): update JSDoc
* feat(formatMessages): formats langChain messages into OpenAI payload format
* chore: install openai-chat-tokens
* refactor(formatMessage): optimize conditional langChain logic
fix(formatFromLangChain): fix destructuring
* feat: accurate prompt tokens for ChatModelStart before generation
* refactor(handleChatModelStart): move to callbacks dir, use factory function
* refactor(initializeLLM): rename 'role' to 'context'
* feat(Balance/Transaction): new schema/models for tracking token spend
refactor(Key): factor out model export to separate file
* refactor(initializeClient): add req,res objects to client options
* feat: add-balance script to add to an existing users' token balance
refactor(Transaction): use multiplier map/function, return balance update
* refactor(Tx): update enum for tokenType, return 1 for multiplier if no map match
* refactor(Tx): add fair fallback value multiplier incase the config result is undefined
* refactor(Balance): rename 'tokens' to 'tokenCredits'
* feat: balance check, add tx.js for new tx-related methods and tests
* chore(summaryPrompts): update prompt token count
* refactor(callbacks): pass req, res
wip: check balance
* refactor(Tx): make convoId a String type, fix(calculateTokenValue)
* refactor(BaseClient): add conversationId as client prop when assigned
* feat(RunManager): track LLM runs with manager, track token spend from LLM,
refactor(OpenAIClient): use RunManager to create callbacks, pass user prop to langchain api calls
* feat(spendTokens): helper to spend prompt/completion tokens
* feat(checkBalance): add helper to check, log, deny request if balance doesn't have enough funds
refactor(Balance): static check method to return object instead of boolean now
wip(OpenAIClient): implement use of checkBalance
* refactor(initializeLLM): add token buffer to assure summary isn't generated when subsequent payload is too large
refactor(OpenAIClient): add checkBalance
refactor(createStartHandler): add checkBalance
* chore: remove prompt and completion token logging from route handler
* chore(spendTokens): add JSDoc
* feat(logTokenCost): record transactions for basic api calls
* chore(ask/edit): invoke getResponseSender only once per API call
* refactor(ask/edit): pass promptTokens to getIds and include in abort data
* refactor(getIds -> getReqData): rename function
* refactor(Tx): increase value if incomplete message
* feat: record tokenUsage when message is aborted
* refactor: subtract tokens when payload includes function_call
* refactor: add namespace for token_balance
* fix(spendTokens): only execute if corresponding token type amounts are defined
* refactor(checkBalance): throws Error if not enough token credits
* refactor(runTitleChain): pass and use signal, spread object props in create helpers, and use 'call' instead of 'run'
* fix(abortMiddleware): circular dependency, and default to empty string for completionTokens
* fix: properly cancel title requests when there isn't enough tokens to generate
* feat(predictNewSummary): custom chain for summaries to allow signal passing
refactor(summaryBuffer): use new custom chain
* feat(RunManager): add getRunByConversationId method, refactor: remove run and throw llm error on handleLLMError
* refactor(createStartHandler): if summary, add error details to runs
* fix(OpenAIClient): support aborting from summarization & showing error to user
refactor(summarizeMessages): remove unnecessary operations counting summaryPromptTokens and note for alternative, pass signal to summaryBuffer
* refactor(logTokenCost -> recordTokenUsage): rename
* refactor(checkBalance): include promptTokens in errorMessage
* refactor(checkBalance/spendTokens): move to models dir
* fix(createLanguageChain): correctly pass config
* refactor(initializeLLM/title): add tokenBuffer of 150 for balance check
* refactor(openAPIPlugin): pass signal and memory, filter functions by the one being called
* refactor(createStartHandler): add error to run if context is plugins as well
* refactor(RunManager/handleLLMError): throw error immediately if plugins, don't remove run
* refactor(PluginsClient): pass memory and signal to tools, cleanup error handling logic
* chore: use absolute equality for addTitle condition
* refactor(checkBalance): move checkBalance to execute after userMessage and tokenCounts are saved, also make conditional
* style: icon changes to match official
* fix(BaseClient): getTokenCountForResponse -> getTokenCount
* fix(formatLangChainMessages): add kwargs as fallback prop from lc_kwargs, update JSDoc
* refactor(Tx.create): does not update balance if CHECK_BALANCE is not enabled
* fix(e2e/cleanUp): cleanup new collections, import all model methods from index
* fix(config/add-balance): add uncaughtException listener
* fix: circular dependency
* refactor(initializeLLM/checkBalance): append new generations to errorMessage if cost exceeds balance
* fix(handleResponseMessage): only record token usage in this method if not error and completion is not skipped
* fix(createStartHandler): correct condition for generations
* chore: bump postcss due to moderate severity vulnerability
* chore: bump zod due to low severity vulnerability
* chore: bump openai & data-provider version
* feat(types): OpenAI Message types
* chore: update bun lockfile
* refactor(CodeBlock): add error block formatting
* refactor(utils/Plugin): factor out formatJSON and cn to separate files (json.ts and cn.ts), add extractJSON
* chore(logViolation): delete user_id after error is logged
* refactor(getMessageError -> Error): change to React.FC, add token_balance handling, use extractJSON to determine JSON instead of regex
* fix(DALL-E): use latest openai SDK
* chore: reorganize imports, fix type issue
* feat(server): add balance route
* fix(api/models): add auth
* feat(data-provider): /api/balance query
* feat: show balance if checking is enabled, refetch on final message or error
* chore: update docs, .env.example with token_usage info, add balance script command
* fix(Balance): fallback to empty obj for balance query
* style: slight adjustment of balance element
* docs(token_usage): add PR notes
2023-10-05 18:34:10 -04:00
|
|
|
const createTitleChain = ({ convo, ...config }) => {
|
refactor: Encrypt & Expire User Provided Keys, feat: Rate Limiting (#874)
* docs: make_your_own.md formatting fix for mkdocs
* feat: add express-mongo-sanitize
feat: add login/registration rate limiting
* chore: remove unnecessary console log
* wip: remove token handling from localStorage to encrypted DB solution
* refactor: minor change to UserService
* fix mongo query and add keys route to server
* fix backend controllers and simplify schema/crud
* refactor: rename token to key to separate from access/refresh tokens, setTokenDialog -> setKeyDialog
* refactor(schemas): TEndpointOption token -> key
* refactor(api): use new encrypted key retrieval system
* fix(SetKeyDialog): fix key prop error
* fix(abortMiddleware): pass random UUID if messageId is not generated yet for proper error display on frontend
* fix(getUserKey): wrong prop passed in arg, adds error handling
* fix: prevent message without conversationId from saving to DB, prevents branching on the frontend to a new top-level branch
* refactor: change wording of multiple display messages
* refactor(checkExpiry -> checkUserKeyExpiry): move to UserService file
* fix: type imports from common
* refactor(SubmitButton): convert to TS
* refactor(key.ts): change localStorage map key name
* refactor: add new custom tailwind classes to better match openAI colors
* chore: remove unnecessary warning and catch ScreenShot error
* refactor: move userKey frontend logic to hooks and remove use of localStorage and instead query the DB
* refactor: invalidate correct query key, memoize userKey hook, conditionally render SetKeyDialog to avoid unnecessary calls, refactor SubmitButton props and useEffect for showing 'provide key first'
* fix(SetKeyDialog): use enum-like object for expiry values
feat(Dropdown): add optionsClassName to dynamically change dropdown options container classes
* fix: handle edge case where user had provided a key but the server changes to env variable for keys
* refactor(OpenAI/titleConvo): move titling to client to retain authorized credentials in message lifecycle for titling
* fix(azure): handle user_provided keys correctly for azure
* feat: send user Id to OpenAI to differentiate users in completion requests
* refactor(OpenAI/titleConvo): adding tokens helps minimize LLM from using the language in title response
* feat: add delete endpoint for keys
* chore: remove throttling of title
* feat: add 'Data controls' to Settings, add 'Revoke' keys feature in Key Dialog and Data controls
* refactor: reorganize PluginsClient files in langchain format
* feat: use langchain for titling convos
* chore: cleanup titling convo, with fallback to original method, escape braces, use only snippet for language detection
* refactor: move helper functions to appropriate langchain folders for reusability
* fix: userProvidesKey handling for gptPlugins
* fix: frontend handling of plugins key
* chore: cleanup logging and ts-ignore SSE
* fix: forwardRef misuse in DangerButton
* fix(GoogleConfig/FileUpload): localize errors and simplify validation with zod
* fix: cleanup google logging and fix user provided key handling
* chore: remove titling from google
* chore: removing logging from browser endpoint
* wip: fix menu flicker
* feat: useLocalStorage hook
* feat: add Tooltip for UI
* refactor(EndpointMenu): utilize Tooltip and useLocalStorage, remove old 'New Chat' slide-over
* fix(e2e): use testId for endpoint menu trigger
* chore: final touches to EndpointMenu before future refactor to declutter component
* refactor(localization): change select endpoint to open menu and add translations
* chore: add final prop to error message response
* ci: minor edits to facilitate testing
* ci: new e2e test which tests for new key setting/revoking features
2023-09-06 10:46:27 -04:00
|
|
|
const titlePrompt = createTitlePrompt({ convo });
|
|
|
|
|
return createStructuredOutputChainFromZod(titleSchema, {
|
|
|
|
|
prompt: titlePrompt,
|
feat: Accurate Token Usage Tracking & Optional Balance (#1018)
* refactor(Chains/llms): allow passing callbacks
* refactor(BaseClient): accurately count completion tokens as generation only
* refactor(OpenAIClient): remove unused getTokenCountForResponse, pass streaming var and callbacks in initializeLLM
* wip: summary prompt tokens
* refactor(summarizeMessages): new cut-off strategy that generates a better summary by adding context from beginning, truncating the middle, and providing the end
wip: draft out relevant providers and variables for token tracing
* refactor(createLLM): make streaming prop false by default
* chore: remove use of getTokenCountForResponse
* refactor(agents): use BufferMemory as ConversationSummaryBufferMemory token usage not easy to trace
* chore: remove passing of streaming prop, also console log useful vars for tracing
* feat: formatFromLangChain helper function to count tokens for ChatModelStart
* refactor(initializeLLM): add role for LLM tracing
* chore(formatFromLangChain): update JSDoc
* feat(formatMessages): formats langChain messages into OpenAI payload format
* chore: install openai-chat-tokens
* refactor(formatMessage): optimize conditional langChain logic
fix(formatFromLangChain): fix destructuring
* feat: accurate prompt tokens for ChatModelStart before generation
* refactor(handleChatModelStart): move to callbacks dir, use factory function
* refactor(initializeLLM): rename 'role' to 'context'
* feat(Balance/Transaction): new schema/models for tracking token spend
refactor(Key): factor out model export to separate file
* refactor(initializeClient): add req,res objects to client options
* feat: add-balance script to add to an existing users' token balance
refactor(Transaction): use multiplier map/function, return balance update
* refactor(Tx): update enum for tokenType, return 1 for multiplier if no map match
* refactor(Tx): add fair fallback value multiplier incase the config result is undefined
* refactor(Balance): rename 'tokens' to 'tokenCredits'
* feat: balance check, add tx.js for new tx-related methods and tests
* chore(summaryPrompts): update prompt token count
* refactor(callbacks): pass req, res
wip: check balance
* refactor(Tx): make convoId a String type, fix(calculateTokenValue)
* refactor(BaseClient): add conversationId as client prop when assigned
* feat(RunManager): track LLM runs with manager, track token spend from LLM,
refactor(OpenAIClient): use RunManager to create callbacks, pass user prop to langchain api calls
* feat(spendTokens): helper to spend prompt/completion tokens
* feat(checkBalance): add helper to check, log, deny request if balance doesn't have enough funds
refactor(Balance): static check method to return object instead of boolean now
wip(OpenAIClient): implement use of checkBalance
* refactor(initializeLLM): add token buffer to assure summary isn't generated when subsequent payload is too large
refactor(OpenAIClient): add checkBalance
refactor(createStartHandler): add checkBalance
* chore: remove prompt and completion token logging from route handler
* chore(spendTokens): add JSDoc
* feat(logTokenCost): record transactions for basic api calls
* chore(ask/edit): invoke getResponseSender only once per API call
* refactor(ask/edit): pass promptTokens to getIds and include in abort data
* refactor(getIds -> getReqData): rename function
* refactor(Tx): increase value if incomplete message
* feat: record tokenUsage when message is aborted
* refactor: subtract tokens when payload includes function_call
* refactor: add namespace for token_balance
* fix(spendTokens): only execute if corresponding token type amounts are defined
* refactor(checkBalance): throws Error if not enough token credits
* refactor(runTitleChain): pass and use signal, spread object props in create helpers, and use 'call' instead of 'run'
* fix(abortMiddleware): circular dependency, and default to empty string for completionTokens
* fix: properly cancel title requests when there isn't enough tokens to generate
* feat(predictNewSummary): custom chain for summaries to allow signal passing
refactor(summaryBuffer): use new custom chain
* feat(RunManager): add getRunByConversationId method, refactor: remove run and throw llm error on handleLLMError
* refactor(createStartHandler): if summary, add error details to runs
* fix(OpenAIClient): support aborting from summarization & showing error to user
refactor(summarizeMessages): remove unnecessary operations counting summaryPromptTokens and note for alternative, pass signal to summaryBuffer
* refactor(logTokenCost -> recordTokenUsage): rename
* refactor(checkBalance): include promptTokens in errorMessage
* refactor(checkBalance/spendTokens): move to models dir
* fix(createLanguageChain): correctly pass config
* refactor(initializeLLM/title): add tokenBuffer of 150 for balance check
* refactor(openAPIPlugin): pass signal and memory, filter functions by the one being called
* refactor(createStartHandler): add error to run if context is plugins as well
* refactor(RunManager/handleLLMError): throw error immediately if plugins, don't remove run
* refactor(PluginsClient): pass memory and signal to tools, cleanup error handling logic
* chore: use absolute equality for addTitle condition
* refactor(checkBalance): move checkBalance to execute after userMessage and tokenCounts are saved, also make conditional
* style: icon changes to match official
* fix(BaseClient): getTokenCountForResponse -> getTokenCount
* fix(formatLangChainMessages): add kwargs as fallback prop from lc_kwargs, update JSDoc
* refactor(Tx.create): does not update balance if CHECK_BALANCE is not enabled
* fix(e2e/cleanUp): cleanup new collections, import all model methods from index
* fix(config/add-balance): add uncaughtException listener
* fix: circular dependency
* refactor(initializeLLM/checkBalance): append new generations to errorMessage if cost exceeds balance
* fix(handleResponseMessage): only record token usage in this method if not error and completion is not skipped
* fix(createStartHandler): correct condition for generations
* chore: bump postcss due to moderate severity vulnerability
* chore: bump zod due to low severity vulnerability
* chore: bump openai & data-provider version
* feat(types): OpenAI Message types
* chore: update bun lockfile
* refactor(CodeBlock): add error block formatting
* refactor(utils/Plugin): factor out formatJSON and cn to separate files (json.ts and cn.ts), add extractJSON
* chore(logViolation): delete user_id after error is logged
* refactor(getMessageError -> Error): change to React.FC, add token_balance handling, use extractJSON to determine JSON instead of regex
* fix(DALL-E): use latest openai SDK
* chore: reorganize imports, fix type issue
* feat(server): add balance route
* fix(api/models): add auth
* feat(data-provider): /api/balance query
* feat: show balance if checking is enabled, refetch on final message or error
* chore: update docs, .env.example with token_usage info, add balance script command
* fix(Balance): fallback to empty obj for balance query
* style: slight adjustment of balance element
* docs(token_usage): add PR notes
2023-10-05 18:34:10 -04:00
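The checkBalance/spendTokens flow named in the bullets above can be pictured with the minimal sketch below. The helper signatures, field names, and return values here are assumptions for illustration only; the project's actual helpers live in the API's models dir and may differ.

// Hypothetical shapes for the checkBalance/spendTokens helpers; illustrative only.
const checkBalance = async ({ user, tokenCredits, promptTokens }) => {
  // deny the request (throw) when there are not enough token credits
  if (promptTokens > tokenCredits) {
    throw new Error(
      `Insufficient credits for user ${user}: balance ${tokenCredits}, prompt needs ${promptTokens}`,
    );
  }
};

const spendTokens = async ({ user, conversationId }, { promptTokens, completionTokens }) => {
  // only record amounts for the token types that are actually defined
  const tx = { user, conversationId };
  if (promptTokens != null) {
    tx.promptTokens = promptTokens;
  }
  if (completionTokens != null) {
    tx.completionTokens = completionTokens;
  }
  return tx; // a real implementation would write a Transaction and update the Balance
};

// simplified request lifecycle: check before the completion call, record after it
(async () => {
  await checkBalance({ user: 'user-1', tokenCredits: 5000, promptTokens: 1200 });
  const tx = await spendTokens(
    { user: 'user-1', conversationId: 'convo-1' },
    { promptTokens: 1200, completionTokens: 350 },
  );
  console.log(tx);
})().catch(console.error);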
// Closing lines of a preceding chain factory (its options spread), followed by
// the runTitleChain implementation. Helpers such as getSnippet, escapeBraces,
// createLanguageChain, createTitleChain, and logger are defined or imported
// earlier in this file (not shown here).
    ...config,
    // verbose: true,
  });
};

const runTitleChain = async ({ llm, text, convo, signal, callbacks }) => {
  let snippet = text;
  try {
    // use only a snippet of the text for language detection
    snippet = getSnippet(text);
  } catch (e) {
    logger.error('[runTitleChain] Error getting snippet of text for titleChain', e);
  }
  // detect the conversation language from the snippet, then generate the title in that language
  const languageChain = createLanguageChain({ llm, callbacks });
  const titleChain = createTitleChain({ llm, callbacks, convo: escapeBraces(convo) });
  const { language } = (await languageChain.call({ inputText: snippet, signal })).output;
  return (await titleChain.call({ language, signal })).output.title;
};

module.exports = runTitleChain;
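A hedged usage sketch follows: how a caller might invoke runTitleChain with an abort signal so a pending title request can be cancelled, as the "properly cancel title requests" fix above requires. The caller shape, the 25-second timeout, the file path, and the convo formatting are illustrative assumptions, not the project's actual titleConvo implementation.

// Illustrative caller only; names, paths, and the convo format are assumptions.
const runTitleChain = require('./runTitleChain'); // the module exported above

const titleConvo = async ({ llm, text, responseText, callbacks = [] }) => {
  const abortController = new AbortController();
  // give up on the title if it has not resolved within 25 seconds
  const timeout = setTimeout(() => abortController.abort(), 25000);
  try {
    const convo = `User:\n"${text}"\nResponse:\n"${responseText}"`;
    return await runTitleChain({ llm, text, convo, signal: abortController.signal, callbacks });
  } finally {
    clearTimeout(timeout);
  }
};

module.exports = titleConvo;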